
Jussi Behrndt Seppo Hassi Henk de Snoo

# Boundary Value Problems, Weyl Functions, and Differential Operators

## Monographs in Mathematics

Volume 108

#### Series Editors

Herbert Amann, Universität Zürich, Zürich, Switzerland; Jean-Pierre Bourguignon, IHES, Bures-sur-Yvette, France; William Y. C. Chen, Nankai University, Tianjin, China

#### Associate Editors

Huzihiro Araki, Kyoto University, Kyoto, Japan; John Ball, Heriot-Watt University, Edinburgh, UK; Franco Brezzi, Università degli Studi di Pavia, Pavia, Italy; Kung Ching Chang, Peking University, Beijing, China; Nigel Hitchin, University of Oxford, Oxford, UK; Helmut Hofer, Courant Institute of Mathematical Sciences, New York, USA; Horst Knörrer, ETH Zürich, Zürich, Switzerland; Don Zagier, Max-Planck-Institut, Bonn, Germany

The foundations of this outstanding book series were laid in 1944. Until the end of the 1970s, a total of 77 volumes appeared, including works of such distinguished mathematicians as Carathéodory, Nevanlinna and Shafarevich, to name a few. The series came to its name and present appearance in the 1980s. In keeping its well-established tradition, only monographs of excellent quality are published in this collection. Comprehensive, in-depth treatments of areas of current interest are presented to a readership ranging from graduate students to professional mathematicians. Concrete examples and applications both within and beyond the immediate domain of mathematics illustrate the import and consequences of the theory under discussion.

More information about this series at http://www.springer.com/series/4843

Jussi Behrndt • Seppo Hassi • Henk de Snoo


Jussi Behrndt, Institut für Angewandte Mathematik, Technische Universität Graz, Graz, Austria

Seppo Hassi, Mathematics and Statistics, University of Vaasa, Vaasa, Finland

Henk de Snoo, Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, The Netherlands

ISSN 1017-0480 ISSN 2296-4886 (electronic) Monographs in Mathematics ISBN 978-3-030-36713-8 ISBN 978-3-030-36714-5 (eBook) https://doi.org/10.1007/978-3-030-36714-5

Mathematics Subject Classification (2010): 47A, 47B, 47E, 47F, 34B, 34L, 35P, 81C, 93B

© The Editor(s) (if applicable) and The Author(s) 2020. This book is an open access publication. Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered company Springer Nature Switzerland AG

The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

## **Preface**

This monograph is about boundary value problems, Weyl functions, and differential operators. It grew out of a number of courses and seminars on functional analysis, operator theory, and differential equations, which the authors have given over a long period of time at various institutions. The project goes back to 2005 with a course on extension theory of symmetric operators, boundary triplets, and Weyl functions given at TU Berlin, while an extended form of the course was presented in 2006/2007 at the University of Groningen. Many more such courses and seminars, often on special topics, would follow at TU Berlin, Jagiellonian University in Kraków, and, since 2011, at TU Graz.

The authors wish to thank all the students, PhD students, and postdocs who have attended these lectures; their critical questions and comments have led to numerous improvements. They have shown that lectures at the blackboard provide the ultimate test for the quality of the material. In particular, we mention Bernhard Gsell, Markus Holzmann, Christian Kühn, Vladimir Lotoreichik, Jonathan Rohleder, Peter Schlosser, Philipp Schmitz, Simon Stadler, Alef Sterk, and Rudi Wietsma. It is our experience that the individual chapters of this monograph can be used (with small additions from some of the other chapters) for independent courses on the respective topics.

The book has benefited from our collaboration with many different colleagues. We would like to single out our friends and faithful coauthors Yuri Arlinskiĭ, Vladimir Derkach, Peter Jonas, Matthias Langer, Annemarie Luger, Mark Malamud, Hagen Neidhardt, Franek Szafraniec, Carsten Trunk, Henrik Winkler, and Harald Woracek. Special thanks go to Fritz Gesztesy, Gerd Grubb, Heinz Langer, and James Rovnyak, who have responded to our queries concerning historical developments and references.

We gratefully acknowledge the support of the following institutions: Deutsche Forschungsgemeinschaft, Jagiellonian University, TU Berlin, and TU Graz. We would like to thank the Mathematisches Forschungsinstitut Oberwolfach and the Mittag-Leffler Institute in Djursholm for their hospitality during the final stages of the preparation of this book. Finally, we are indebted to the Austrian Science Fund (Grant PUB 683-Z) and the University of Vaasa for funding the open access publication of this monograph.

Jussi Behrndt, Seppo Hassi, and Henk de Snoo

## **Introduction**

In this monograph the theory of boundary triplets and their Weyl functions is developed and applied to the analysis of boundary value problems for differential equations and general operators in Hilbert spaces. Concrete illustrations by means of weighted Sturm–Liouville differential operators, canonical systems of differential equations, and multidimensional Schrödinger operators are provided. The abstract notions of boundary triplets and Weyl functions have their roots in the theory of ordinary differential operators; they appear in a slightly different context also in the treatment of partial differential operators.

Before describing the contents of the monograph it may be helpful to explain the ideas in this text by means of the following simple Sturm–Liouville differential expression

$$L = -\frac{d^2}{dx^2} + V,\tag{1}$$

where it is assumed that the potential $V$ is a real measurable function. The context in which this differential expression will be placed serves as an example as well as a motivation. The first step is to associate with $L$ some differential operators in a suitable Hilbert space. Assume, e.g., that (1) is given on the positive half-line $\mathbb{R}^+ = (0, \infty)$ and assume for simplicity that the real function $V$ is bounded. Define the linear space $\mathfrak{D}_{\max}$ by

$$\mathfrak{D}\_{\max} = \left\{ f \in L^2(\mathbb{R}^+) : f, f' \text{ absolutely continuous, } Lf \in L^2(\mathbb{R}^+) \right\}$$

and define the minimal operator S associated with L by

$$Sf = -f'' + Vf, \qquad \text{dom}\, S = \left\{ f \in \mathfrak{D}\_{\text{max}} \, : \, f(0) = f'(0) = 0 \right\}.$$

Then $S$ is a closed densely defined symmetric operator in $L^2(\mathbb{R}^+)$; in fact, it is the closure of (the graph of) the restriction of $S$ to $C_0^\infty(\mathbb{R}^+)$. It can be shown that the adjoint operator $S^*$ is given by

$$S^\*f = -f'' + Vf, \qquad \text{dom}\, S^\* = \mathfrak{D}\_{\text{max}},$$

which is usually called the maximal operator associated with $L$. Roughly speaking, $S$ is a two-dimensional restriction of $S^*$ by means of the boundary conditions $f(0) = 0$ and $f'(0) = 0$. Note that the maximal domain $\mathfrak{D}_{\max}$ coincides with the second-order Sobolev space $H^2(\mathbb{R}^+)$.

The notion of boundary triplet will now be explained in the present situation. For this, consider $f, g \in \operatorname{dom} S^*$ and observe that integration by parts leads to

$$\begin{aligned} (S^\*f,g)\_{L^2(\mathbb{R}^+)} - (f,S^\*g)\_{L^2(\mathbb{R}^+)} &= -f'(x)\overline{g(x)}\Big|\_0^\infty + f(x)\overline{g'(x)}\Big|\_0^\infty \\ &= f'(0)\overline{g(0)} - f(0)\overline{g'(0)}, \end{aligned}$$

where it was used that the products $f'\overline{g}$ and $f\overline{g'}$ vanish at $\infty$. Inspired by the above identity, define boundary mappings

$$\Gamma\_0, \Gamma\_1: \text{dom}\, S^\* \to \mathbb{C}, \quad f \mapsto \Gamma\_0 f := f(0) \quad \text{and} \quad f \mapsto \Gamma\_1 f := f'(0), \tag{2}$$

so that for all $f, g \in \operatorname{dom} S^*$ one has

$$(S^\*f,g)\_{L^2(\mathbb{R}^+)} - (f,S^\*g)\_{L^2(\mathbb{R}^+)} = (\Gamma\_1f,\Gamma\_0g)\_{\mathbb{C}} - (\Gamma\_0f,\Gamma\_1g)\_{\mathbb{C}},\tag{3}$$

which is the so-called abstract Green identity in the definition of a boundary triplet; note that on the right-hand side of (3) the scalar product in the (boundary) Hilbert space $\mathbb{C}$ is used. This abstract Green identity is the key feature in the notion of a boundary triplet and it is primarily responsible for the successful functioning of the whole theory. Note also that the combined boundary mapping

$$(\Gamma\_0, \Gamma\_1)^\top : \text{dom}\, S^\* \to \mathbb{C}^2$$

is surjective, which is understood as a maximality condition in the sense that the image space of the boundary maps is not unnecessarily large. Observe that one has $\operatorname{dom} S = \ker \Gamma_0 \cap \ker \Gamma_1$. The operator realizations $A$ of the Sturm–Liouville differential expression $L$ which are intermediate extensions, that is, $S \subset A \subset S^*$, can be described by boundary conditions expressed via the boundary maps. More precisely, for $\tau \in \mathbb{C} \cup \{\infty\}$ the operator $A_\tau$ is defined by

$$A\_{\tau}f = S^\*f, \qquad \text{dom}\, A\_{\tau} = \ker\left(\Gamma\_1 - \tau\Gamma\_0\right), \tag{4}$$

which in a more explicit form reads

$$A\_{\tau}f = -f'' + Vf, \qquad \text{dom}\, A\_{\tau} = \left\{ f \in \mathfrak{D}\_{\text{max}} : f'(0) = \tau f(0) \right\};$$

the case $\tau = \infty$ is understood as the boundary condition $\ker \Gamma_0$, that is,

$$A\_{\infty}f = -f'' + Vf, \qquad \text{dom}\, A\_{\infty} = \left\{ f \in \mathfrak{D}\_{\text{max}} \, : \, f(0) = 0 \right\}.\tag{5}$$
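The abstract Green identity (3) lends itself to a direct numerical check. The following sketch (the bounded potential $V$ and the test functions $f$, $g$ are illustrative choices, not taken from the text) compares the left-hand side of (3), computed by quadrature, with the boundary terms on the right:

```python
import numpy as np

# Quadrature check of the abstract Green identity (3) for
# L = -d^2/dx^2 + V on the half-line; V, f, g are illustrative choices.
x = np.linspace(0.0, 40.0, 400001)
dx = x[1] - x[0]
trapz = lambda w: dx * (np.sum(w) - 0.5 * (w[0] + w[-1]))  # trapezoid rule
inner = lambda u, v: trapz(u * np.conj(v))                 # L^2(R^+) inner product

V = 1.0 / (1.0 + x**2)                                     # real, bounded

f   = (1 + 2j * x) * np.exp(-x)
fp  = (2j - 1 - 2j * x) * np.exp(-x)                       # f'
fpp = (1 - 4j + 2j * x) * np.exp(-x)                       # f''

g   = (3 - 1j * x**2) * np.exp(-2 * x)
gp  = (-6 - 2j * x + 2j * x**2) * np.exp(-2 * x)           # g'
gpp = (12 - 2j + 8j * x - 4j * x**2) * np.exp(-2 * x)      # g''

lhs = inner(-fpp + V * f, g) - inner(f, -gpp + V * g)      # (S*f,g) - (f,S*g)
rhs = fp[0] * np.conj(g[0]) - f[0] * np.conj(gp[0])        # boundary terms in (3)
assert abs(lhs - rhs) < 1e-5
```

Since $V$ is real, the potential terms cancel and only the boundary contributions at $0$ survive, exactly as in the integration by parts above.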

In the definition (4) the quantity $\tau$ plays the role of a boundary parameter that links the boundary values $\Gamma_0 f = f(0)$ and $\Gamma_1 f = f'(0)$ of the functions $f \in \operatorname{dom} S^*$, which determine the Dirichlet and Neumann boundary conditions, respectively. The properties of the boundary parameter are directly connected with the properties of the corresponding operator $A_\tau$; in particular, the realization $A_\tau$ is self-adjoint in $L^2(\mathbb{R}^+)$ if and only if $\tau \in \mathbb{R} \cup \{\infty\}$.
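The self-adjointness criterion can be illustrated numerically as well: for a real boundary parameter the boundary terms in the Green identity cancel on $\operatorname{dom} A_\tau$. A sketch with the illustrative choices $\tau = 2$ and $V(x) = e^{-x}$:

```python
import numpy as np

# Symmetry of the Robin realization A_tau for real tau (here tau = 2):
# if f'(0) = tau f(0) and g'(0) = tau g(0), the boundary terms in the
# Green identity cancel.  V, f, g are illustrative (real-valued) choices.
x = np.linspace(0.0, 30.0, 300001)
dx = x[1] - x[0]
trapz = lambda w: dx * (np.sum(w) - 0.5 * (w[0] + w[-1]))
tau = 2.0
V = np.exp(-x)

f, fpp = (1 + 3 * x) * np.exp(-x), (3 * x - 5) * np.exp(-x)            # f'(0) = 2 = tau*f(0)
g, gpp = (1 + 4 * x) * np.exp(-2 * x), (16 * x - 12) * np.exp(-2 * x)  # g'(0) = 2 = tau*g(0)

sym_defect = trapz((-fpp + V * f) * g) - trapz(f * (-gpp + V * g))
assert abs(sym_defect) < 1e-5            # (A_tau f, g) = (f, A_tau g)

# A function violating the boundary condition (h'(0) = -1 != tau*h(0))
# leaves the nonzero boundary term h'(0)g(0) - h(0)g'(0) = -3:
h, hpp = np.exp(-x), np.exp(-x)
defect = trapz((-hpp + V * h) * g) - trapz(h * (-gpp + V * g))
assert abs(defect - (-3.0)) < 1e-5
```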

The next main goal is to motivate and illustrate the definition of the Weyl function as an analytic object corresponding to a boundary triplet, which is indispensable in the spectral theory of the intermediate extensions. For this, let $\lambda \in \mathbb{C}$ and consider first the unique solutions $\varphi_\lambda$ and $\psi_\lambda$ of the boundary value problems

$$\begin{aligned} -\varphi\_{\lambda}^{\prime\prime} + V\varphi\_{\lambda} &= \lambda \varphi\_{\lambda}, \qquad \varphi\_{\lambda}(0) = 1, & \varphi\_{\lambda}^{\prime}(0) = 0, \\ -\psi\_{\lambda}^{\prime\prime} + V\psi\_{\lambda} &= \lambda \psi\_{\lambda}, \qquad \psi\_{\lambda}(0) = 0, & \psi\_{\lambda}^{\prime}(0) = 1, \end{aligned} \tag{6}$$

and note that in general $\varphi_\lambda, \psi_\lambda \notin L^2(\mathbb{R}^+)$. It was shown by H. Weyl more than a century ago that for $\lambda \in \mathbb{C} \setminus \mathbb{R}$ there exists $m(\lambda) \in \mathbb{C}$ such that

$$x \mapsto f\_{\lambda}(x) = \varphi\_{\lambda}(x) + m(\lambda)\psi\_{\lambda}(x) \in L^{2}(\mathbb{R}^{+}),\tag{7}$$

and it turned out that the function $m : \mathbb{C} \setminus \mathbb{R} \to \mathbb{C}$ is holomorphic and has a positive imaginary part in the upper half-plane $\mathbb{C}^+$. This function and its interplay with spectral theory were later studied extensively by E.C. Titchmarsh; hence the frequently used terminology Titchmarsh–Weyl $m$-function. It plays a key role in the spectral analysis of Sturm–Liouville differential operators. E.g., the (real) poles of $m$ coincide with the isolated eigenvalues of the self-adjoint Dirichlet operator $A_\infty$ in (5), and the absolutely continuous spectrum of $A_\infty$ is, roughly speaking, given by those $\lambda \in \mathbb{R}$ for which $\operatorname{Im} m(\lambda + i0) > 0$. In a similar way one can also characterize the continuous spectrum and the embedded eigenvalues, and exclude singular continuous spectrum of $A_\infty$.
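For the free case $V = 0$ everything is explicit: the solutions of (6) are $\varphi_\lambda(x) = \cos(kx)$ and $\psi_\lambda(x) = \sin(kx)/k$ with $k = \sqrt{\lambda}$, $\operatorname{Im} k > 0$, the Weyl solution is $f_\lambda(x) = e^{ikx}$, and hence $m(\lambda) = ik$. A small numerical confirmation of this classical example (the sample point $\lambda$ is an arbitrary illustrative choice):

```python
import cmath

# For V = 0 the solutions of (6) are phi(x) = cos(kx), psi(x) = sin(kx)/k
# with k = sqrt(lam), Im k > 0, and the Weyl solution (7) is exp(ikx):
# exp(ikx) = cos(kx) + ik*(sin(kx)/k), so m(lam) = ik.
lam = 1.5 + 0.7j                       # a point in the upper half-plane
k = cmath.sqrt(lam)                    # principal branch gives Im k > 0
assert k.imag > 0

m = 1j * k                             # Titchmarsh-Weyl m-function at lam

for xval in (0.0, 0.3, 1.7, 4.0):
    phi = cmath.cos(k * xval)
    psi = cmath.sin(k * xval) / k
    f = phi + m * psi                  # the combination in (7)
    assert abs(f - cmath.exp(1j * k * xval)) < 1e-12

# exp(ikx) decays exponentially, hence lies in L^2(R^+) ...
assert abs(cmath.exp(1j * k * 10)) < abs(cmath.exp(1j * k * 1))
# ... and Im m(lam) > 0: m maps C^+ into itself (Nevanlinna property).
assert m.imag > 0
```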

Observe that for each $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the function $x \mapsto f_\lambda(x)$ in (7) belongs to $\operatorname{dom} S^* = \mathfrak{D}_{\max}$ and that, in fact, $-f_\lambda'' + Vf_\lambda = \lambda f_\lambda$ for $\lambda \in \mathbb{C} \setminus \mathbb{R}$; in other words, $f_\lambda \in \ker(S^* - \lambda)$. Let $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$ be the boundary triplet for $S^*$ with the boundary mappings defined in (2). From the choice of $\varphi_\lambda$ and $\psi_\lambda$ in (6) it is clear that

$$m(\lambda)\Gamma\_0 f\_\lambda = m(\lambda)f\_\lambda(0) = m(\lambda) = \Gamma\_1 f\_\lambda, \quad f\_\lambda \in \ker(S^\* - \lambda). \tag{8}$$

In the general theory this identity is used as the definition of the Weyl function corresponding to a boundary triplet. In other words, the Weyl function corresponding to the boundary triplet $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$ is defined as the function $m$ that satisfies (8) for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$ (and even for the possibly larger set of $\lambda$ belonging to the resolvent set of the self-adjoint Dirichlet operator $A_\infty$) and hence coincides with the Titchmarsh–Weyl $m$-function introduced via (7). Here the Weyl function maps Dirichlet boundary values of $L^2$-solutions of the equation $-f_\lambda'' + Vf_\lambda = \lambda f_\lambda$ onto the corresponding Neumann boundary values, and therefore $m(\lambda)$ acts formally like a Dirichlet-to-Neumann map. Besides the Weyl function, one associates to the boundary triplet $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$ the so-called $\gamma$-field as the mapping $\gamma(\lambda) : \mathbb{C} \to L^2(\mathbb{R}^+)$ that assigns to a prescribed boundary value $c \in \mathbb{C}$ the solution $h_\lambda \in \operatorname{dom} S^*$ of the boundary value problem

$$-h\_{\lambda}^{\prime\prime} + Vh\_{\lambda} = \lambda h\_{\lambda}, \qquad \Gamma\_0 h\_{\lambda} = h\_{\lambda}(0) = c.$$

Since $\gamma(\lambda)c = h_\lambda = cf_\lambda$, it is clear that $m(\lambda) = \Gamma_1 \gamma(\lambda)$. Moreover, one can show with the help of the abstract Green identity that the adjoint $\gamma(\lambda)^* : L^2(\mathbb{R}^+) \to \mathbb{C}$ is given by $\gamma(\lambda)^* = \Gamma_1 (A_\infty - \overline{\lambda})^{-1}$. The Weyl function and $\gamma$-field associated to the boundary triplet $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$ appear in the perturbation term in Kreĭn's formula

$$(A\_\tau - \lambda)^{-1} = (A\_\infty - \lambda)^{-1} + \gamma(\lambda)(\tau - m(\lambda))^{-1}\gamma(\overline{\lambda})^\*,$$

where, for simplicity, it is assumed that $A_\tau$ is a self-adjoint realization of $L$ as in (4) corresponding to some boundary parameter $\tau \in \mathbb{R}$ and $\lambda \in \rho(A_\tau) \cap \rho(A_\infty)$. Kreĭn's formula in this particular case provides a description of the resolvent difference of $A_\tau$ and the fixed self-adjoint extension $A_\infty$. It is important to note that $\gamma(\lambda)$ and $\gamma(\overline{\lambda})^*$ in the perturbation term provide a link between the original Hilbert space $L^2(\mathbb{R}^+)$ and the boundary space $\mathbb{C}$, but do not affect the resolvents of $A_\infty$ and $A_\tau$. Therefore, if $\lambda \in \rho(A_\infty)$, then the singularities of the resolvent $\lambda \mapsto (A_\tau - \lambda)^{-1}$ are reflected in the singularities of the term $\lambda \mapsto (\tau - m(\lambda))^{-1}$ and vice versa. In fact, the function $\lambda \mapsto (\tau - m(\lambda))^{-1}$ is connected with the spectrum of $A_\tau$ in the same way as the function $\lambda \mapsto m(\lambda)$ is connected with the spectrum of $A_\infty$.
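Kreĭn's formula can be tested numerically in the free case $V = 0$, where all ingredients are explicit. The sketch below uses the illustrative choices $\lambda = -1$ and $\tau = 2$, so that $k = \sqrt{\lambda} = i$, $m(\lambda) = ik = -1$, $\gamma(\lambda)c = c\,e^{-x}$, and the Dirichlet resolvent $(A_\infty - \lambda)^{-1}$ has the classical integral kernel $G_\infty(x,y) = \tfrac{1}{2}\big(e^{-|x-y|} - e^{-(x+y)}\big)$; the result is compared with the exact solution of the corresponding Robin problem:

```python
import numpy as np

# Krein's formula for V = 0, lam = -1, tau = 2 (illustrative choices):
# k = i, m(lam) = ik = -1, gamma(lam)c = c*exp(-x), and the Dirichlet
# resolvent has kernel G_inf(x,y) = (exp(-|x-y|) - exp(-(x+y)))/2.
lam, tau, m = -1.0, 2.0, -1.0

y = np.linspace(0.0, 60.0, 600001)
dy = y[1] - y[0]
g = np.exp(-2.0 * y)                   # right-hand side g in L^2(R^+)

def resolvent_tau(xv):
    """(A_tau - lam)^{-1} g at the point xv, built via Krein's formula."""
    G_inf = (np.exp(-np.abs(xv - y)) - np.exp(-(xv + y))) / 2.0
    krein = np.exp(-xv) * np.exp(-y) / (tau - m)    # gamma (tau - m)^{-1} gamma*
    w = (G_inf + krein) * g
    return dy * (np.sum(w) - 0.5 * (w[0] + w[-1]))  # trapezoid rule

# Exact decaying solution of -u'' + u = exp(-2x) with u'(0) = 2 u(0):
exact = lambda xv: -np.exp(-2.0 * xv) / 3.0 + (4.0 / 9.0) * np.exp(-xv)

for xv in (0.0, 0.5, 2.0):
    assert abs(resolvent_tau(xv) - exact(xv)) < 1e-5
```

The assembled function satisfies $-u'' + u = g$ together with the Robin condition $u'(0) = \tau u(0)$, which is exactly what Kreĭn's formula encodes in this example.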

There is another efficient technique to associate differential operators with the differential expression L, which is based on the sesquilinear form t corresponding to L,

$$\mathfrak{t}[f,g] = (f',g')\_{L^2(\mathbb{R}^+)} + (Vf,g)\_{L^2(\mathbb{R}^+)},\tag{9}$$

defined on, e.g.,

$$\mathfrak{D} = \left\{ f \in L^2(\mathbb{R}^+) \, : \, f \text{ absolutely continuous, } f' \in L^2(\mathbb{R}^+) \right\}, \tag{10}$$

and the first representation theorem for sesquilinear forms. In fact, one verifies that $\mathfrak{t}$ in (9)–(10) is a densely defined closed semibounded form in $L^2(\mathbb{R}^+)$, and hence there exists a uniquely determined self-adjoint operator $S_1$ with $\operatorname{dom} S_1 \subset \mathfrak{D}$ such that

$$(S\_1 f, g)\_{L^2(\mathbb{R}^+)} = \mathfrak{t}[f, g], \qquad f \in \text{dom}\, S\_1, \; g \in \mathfrak{D}.\tag{11}$$

Note that here the form domain $\mathfrak{D}$ coincides with the first-order Sobolev space $H^1(\mathbb{R}^+)$. It can be shown that the self-adjoint operator $S_1$ is actually an extension of the minimal operator $S$. Instead of the domain $\mathfrak{D}$ in (10) one may consider the sesquilinear form $\mathfrak{t}$ on the smaller domain $\mathfrak{D}_0 = \{f \in \mathfrak{D} : f(0) = 0\}$, which also leads to a densely defined closed semibounded form in $L^2(\mathbb{R}^+)$. Again, via the first representation theorem, there is a corresponding self-adjoint operator $S_0$ with $\operatorname{dom} S_0 \subset \mathfrak{D}_0$ determined by

$$(S\_0 f, g)\_{L^2(\mathbb{R}^+)} = \mathfrak{t}[f, g], \qquad f \in \text{dom}\, S\_0, \; g \in \mathfrak{D}\_0. \tag{12}$$

One verifies that the self-adjoint operator $S_1$ in (11) coincides with the self-adjoint realization of $L$ determined by the boundary condition $\ker \Gamma_1$ and that the self-adjoint operator $S_0$ in (12) coincides with the self-adjoint realization of $L$ determined by the boundary condition $\ker \Gamma_0$ in (4), that is, $S_1$ corresponds to the boundary parameter $\tau = 0$ and $S_0$ is the Dirichlet operator corresponding to the boundary parameter $\tau = \infty$. Furthermore, in the situation discussed here the self-adjoint operator $S_0$ in (12) is the Friedrichs extension of the minimal (or preminimal) operator associated to $L$.
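The representation (11) can also be checked by quadrature: for $f$ satisfying the Neumann condition $f'(0) = 0$ the boundary term from integration by parts vanishes, so $(S_1 f, g)_{L^2(\mathbb{R}^+)} = \mathfrak{t}[f,g]$. The potential and the functions $f$, $g$ below are illustrative choices:

```python
import numpy as np

# Quadrature check of the representation (11): for f with f'(0) = 0
# (the Neumann realization S_1, tau = 0) and g in H^1(R^+) one has
# (S_1 f, g) = t[f, g].  V, f, g are illustrative (real-valued) choices.
x = np.linspace(0.0, 30.0, 300001)
dx = x[1] - x[0]
trapz = lambda w: dx * (np.sum(w) - 0.5 * (w[0] + w[-1]))

V = np.exp(-x)                          # real, bounded potential

f   = np.exp(-x**2)                     # f'(0) = 0: Neumann condition
fp  = -2.0 * x * np.exp(-x**2)
fpp = (4.0 * x**2 - 2.0) * np.exp(-x**2)

g  = (1.0 + x) * np.exp(-x)             # an element of H^1(R^+)
gp = -x * np.exp(-x)

lhs = trapz((-fpp + V * f) * g)                 # (S_1 f, g)
rhs = trapz(fp * gp) + trapz(V * f * g)         # t[f, g] as in (9)
assert abs(lhs - rhs) < 1e-5
```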

The concept of boundary triplet is supplemented by the notion of boundary pair, which is inspired by the form approach indicated above. More precisely, in the present situation it turns out that $\{\mathcal{G}, \Lambda\}$, where $\mathcal{G} = \mathbb{C}$ and

$$
\Lambda: \mathfrak{D} \to \mathbb{C}, \qquad f \mapsto \Lambda f := f(0), \tag{13}
$$

is a boundary pair for the minimal operator $S$ (corresponding to $S_1$). For this, one has to ensure that the mapping $\Lambda$ defined on the form domain of $S_1$ is continuous with respect to the Hilbert space topology generated by the closed form $\mathfrak{t}$ on $\mathfrak{D}$, and that $\ker \Lambda$ coincides with the form domain corresponding to the Friedrichs extension of $S$. Note also that in the present situation the mapping $\Lambda$ in (13) is an extension of the boundary mapping $\Gamma_0 : \operatorname{dom} S^* \to \mathbb{C}$ to the form domain $\mathfrak{D}$. With the help of the boundary pair $\{\mathbb{C}, \Lambda\}$ one can parametrize all densely defined closed semibounded forms corresponding to semibounded self-adjoint extensions of $S$ via

$$\mathfrak{t}\_{\tau}[f,g] = \mathfrak{t}[f,g] + (\tau \Lambda f, \Lambda g)\_{\mathbb{C}}, \qquad f, g \in \mathfrak{D}, \tag{14}$$

where $\tau \in \mathbb{R} \cup \{\infty\}$, and the case $\tau = \infty$ corresponds to the boundary condition $\Lambda f = 0$, that is, to the form $\mathfrak{t}$ on $\mathfrak{D}_0$. The boundary pair and the boundary triplet are connected via the first Green identity

$$(S^\*f,g)\_{L^2(\mathbb{R}^+)} = \mathfrak{t}[f,g] + (\Gamma\_1 f, \Lambda g)\_{\mathbb{C}}, \quad f \in \text{dom}\, S^\*, \, g \in \mathfrak{D}.$$

The first Green identity makes it possible to identify the closed semibounded forms in (14) with the corresponding self-adjoint operator realizations A<sup>τ</sup> of L described via boundary conditions in (4). For f ∈ dom A<sup>τ</sup> and g ∈ D, the first Green identity reduces to

$$(A\_{\tau}f,g)\_{L^{2}(\mathbb{R}+)} = \mathfrak{t}[f,g] + (\tau \Gamma\_{0}f, \Lambda g)\_{\mathbb{C}} = \mathfrak{t}[f,g] + (\tau \Lambda f, \Lambda g)\_{\mathbb{C}},$$

and the expression $(\tau \Lambda f, \Lambda g)_{\mathbb{C}}$ on the right-hand side can also be interpreted as a sesquilinear form in the boundary space $\mathbb{C}$. In this sense the theory of boundary pairs for semibounded symmetric operators complements the theory of boundary triplets in a natural way: it provides a description of the closed semibounded forms corresponding to semibounded self-adjoint extensions of the minimal operator $S$.
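The first Green identity itself is easy to verify numerically with a test function whose Neumann trace does not vanish, so that the boundary term $(\Gamma_1 f, \Lambda g)_{\mathbb{C}} = f'(0)\overline{g(0)}$ is actually visible; $V$, $f$, and $g$ are again illustrative choices:

```python
import numpy as np

# Quadrature check of the first Green identity
# (S* f, g) = t[f, g] + (Gamma_1 f, Lambda g), with Gamma_1 f = f'(0)
# and Lambda g = g(0); here f'(0) = -1, so the boundary term is nonzero.
# V, f, g are illustrative (real-valued) choices.
x = np.linspace(0.0, 30.0, 300001)
dx = x[1] - x[0]
trapz = lambda w: dx * (np.sum(w) - 0.5 * (w[0] + w[-1]))

V = 1.0 / (1.0 + x) ** 2                # real, bounded potential

f   = np.exp(-x)                        # f(0) = 1, f'(0) = -1
fp  = -np.exp(-x)
fpp = np.exp(-x)

g  = (2.0 + x) * np.exp(-2.0 * x)       # g(0) = 2, an element of H^1(R^+)
gp = (-3.0 - 2.0 * x) * np.exp(-2.0 * x)

lhs = trapz((-fpp + V * f) * g)                         # (S* f, g)
rhs = trapz(fp * gp) + trapz(V * f * g) + fp[0] * g[0]  # t[f,g] + f'(0) g(0)
assert abs(lhs - rhs) < 1e-5
```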

Methods to treat Sturm–Liouville problems such as the one discussed above go back to H. Weyl [758, 759, 760], whose papers on this topic appeared in 1910/1911; see also [761]. The interpretation of a Sturm–Liouville expression as an operator in a Hilbert space can already be found in the 1932 book of M.H. Stone [724]. In this monograph Stone gave an abstract treatment of operators in a Hilbert space including the work of J. von Neumann [610, 611] from 1929 and 1932, who had also introduced the extension theory of densely defined symmetric operators and found the formulas which carry his name: self-adjoint extensions correspond to unitary mappings between the defect spaces. The von Neumann formulas are abstract, since they are formulated in terms of the defect spaces of the symmetric operator, and they needed to be related to concrete boundary value problems. With this in mind another approach involving abstract boundary conditions was developed by J.W. Calkin [187] in his 1937 Harvard doctoral dissertation, which was written under the direction of Stone, who suggested the topic. Calkin was also advised by von Neumann. Calkin's work on boundary value problems did not receive the attention it might have deserved. It seems that he never returned to it; his later mathematical work was related to World War II and the Manhattan project in Los Alamos.

Another way to deal with the self-adjoint extensions of a symmetric operator is via Kreĭn's resolvent formula. The early background of this formula can be found in the idea of perturbation of self-adjoint operators. Kreĭn's formula describes the resolvent of a self-adjoint extension in terms of the resolvent of a fixed self-adjoint extension and a perturbation term which involves a so-called Q-function and a parameter describing the self-adjoint extension. The Q-function uniquely determines the underlying symmetry and the fixed self-adjoint extension, up to unitary equivalence, and thus reflects their spectral properties. The original Kreĭn formula for equal finite defect numbers goes back to M.G. Kreĭn [491, 492] in the middle of the 1940s; only in 1965 was it finally established for the case of equal infinite defect numbers by S.N. Saakyan [679]. In fact, the self-adjoint extensions were allowed to be in a Hilbert space which contains the original Hilbert space as a closed subspace. This type of extension appeared after 1940 in papers by M.G. Kreĭn and M.A. Naĭmark [605, 606, 607]. Later, in the 1950s and 1960s, A.V. Štraus described such exit space extensions in the framework of the von Neumann formulas via holomorphic contractions between the defect spaces [731]. The Q-function in Kreĭn's formula can be seen as an abstract analog of the Titchmarsh–Weyl function in the above Sturm–Liouville example; it was extensively studied in the 1960s and 1970s by M.G. Kreĭn and H. Langer [497]–[504], also in the context of Pontryagin spaces.

From the early 1940s on, E.C. Titchmarsh turned his attention to the singular Sturm–Liouville equation. He put aside Weyl's method of handling the Sturm–Liouville problem on the basis of integral equations and also bypassed the use of the general theory of linear operators in Hilbert spaces as in Stone's book [739]. Instead, Titchmarsh used contour integration and the Cauchy calculus of residues, influenced by the work of E. Hilb [417, 418, 419], a contemporary of Weyl. In this way he found a simple formula to determine the spectral measure; this last formula was also discovered by K. Kodaira around the same time [469, 470]. A complete survey of the work of Titchmarsh, both for ordinary and partial differential operators, is given in his two books on eigenfunction expansions [740, 741]. A different approach, followed by B.M. Levitan [541, 542], N. Levinson [539, 540], and K. Yosida [780, 781], is based on the fact that the resolvent operator of the self-adjoint realization of a singular differential operator can be approximated by compact resolvents corresponding to Sturm–Liouville problems on proper closed subintervals. Closely connected with this is an abstract approach to eigenfunction expansions generated by differential operators that was introduced by Kreĭn [495] in the form of directing functionals.

Influenced by questions from mathematical physics, von Neumann posed the following problem in the middle of the 1930s: can one extend a densely defined semibounded symmetric operator to a self-adjoint operator with the same lower bound? There were contributions by M.H. Stone [724] and K.O. Friedrichs [310] (whose work was simplified by H. Freudenthal [309]). The Friedrichs extension was the solution to von Neumann's problem. For Sturm–Liouville operators the Friedrichs extension was determined in various cases by K.O. Friedrichs [311] in 1935 and by F. Rellich [654] in 1950. Another semibounded extension, the so-called Kreĭn–von Neumann extension (going back to Stone), has particularly interesting properties. It was Kreĭn [493, 494] who established a complete theory of semibounded extensions. In the middle of the 1950s this circle of ideas was carried forward, and it inspired contributions by M.S. Birman [139], and also M.I. Vishik [747], who was particularly interested in the case of elliptic partial differential operators. Building on the work of J.L. Lions and E. Magenes [544] on Sobolev spaces and trace mappings, G. Grubb [352, 353] gave a characterization of all closed extensions of a minimal elliptic operator by nonlocal boundary conditions in her 1966 Stanford doctoral dissertation, written under the direction of R.S. Phillips.

The context of symmetric operators which are densely defined was soon felt to be too restrictive. Already in 1949 M.A. Krasnoselskiĭ [490] described all self-adjoint operator extensions of a not necessarily densely defined symmetric operator. The appearance of the work on linear relations by R. Arens [42] in 1961 made all the difference. B.C. Orcutt [619] in a 1969 dissertation written under the direction of J. Rovnyak treated the spectral theory of canonical systems of differential equations in terms of linear relations. Subsequently, E.A. Coddington [202] in 1973 gave a description of all self-adjoint relation extensions of a symmetric relation. In fact, it turned out that many of the earlier results concerning extensions of symmetric operators could be put in the framework of relations. The new context also made it possible to consider nonstandard boundary conditions (involving integrals, for instance). Furthermore, in terms of relations the Kreĭn–von Neumann extension of a semibounded relation could be simply expressed in terms of the Friedrichs extension. There has been an abundance of papers devoted to linear relations in Hilbert spaces, and later also to linear relations in indefinite inner product spaces.

In the middle of the 1970s boundary triplets were introduced independently by V.M. Bruk [176] and A.N. Kochubei [466] as a convenient tool for the description of boundary values of abstract Hilbert space operators; they applied them to, e.g., Sturm–Liouville operators with an operator-valued potential. The main feature is that under a given boundary triplet there is a natural correspondence between self-adjoint extensions of a symmetric operator and self-adjoint relations in the parameter space. An overview of the theory with applications to differential operators is contained in the 1984 book by M.L. Gorbachuk and V.I. Gorbachuk [346]. Around the same time V.A. Derkach and M.M. Malamud [244, 246] continued the work on boundary triplets by associating the notion of Weyl function to a boundary triplet; their later work was written in the context of symmetric operators that are not necessarily densely defined. The Weyl function is a very useful tool in spectral analysis; it turns out to be a special choice of a Q-function (which is uniquely determined by the boundary triplet), and hence the analytic properties and the limit behavior of the Weyl function towards the real line reflect the spectral properties of the self-adjoint extensions. Broadly speaking, boundary triplets and Weyl functions placed the work of Titchmarsh, and others, in a more abstract setting while retaining the flavor of concrete boundary value problems. The link to form methods and the Birman–Kreĭn–Vishik approach to semibounded self-adjoint extensions is made with the help of so-called boundary pairs. The origin of the concept of boundary pair lies in the work of Kreĭn and Vishik; it was formalized and studied by V.E. Lyantse and O.G. Storozh [552] in the early 1980s. Its connection with boundary triplets was later established by Yu.M. Arlinskiĭ [44].

It is the main objective of this monograph to present the theory of boundary triplets and Weyl functions in an easily accessible and self-contained manner. The exposition is detailed and kept as simple as possible; the reader is only assumed to be familiar with the basic principles of functional analysis and some fundamentals of the spectral theory of self-adjoint operators in Hilbert spaces. The monograph is divided into the abstract part, Chapters 1–5, the applied part, Chapters 6–8, and Appendices A–D. The heart of the monograph is Chapter 2, and it is complemented by Chapter 5; for a rough idea of the general techniques the reader may first look through these chapters and examine one of the applications (which may also be read independently) afterwards: Sturm–Liouville operators, canonical systems, or Schrödinger operators, according to personal taste and preferences.

The monograph opens in Chapter 1 with a detailed introduction to the theory of linear operators and relations in Hilbert spaces. A large part of this material is preparatory and may be used for reference purposes in the rest of the text.

The heart of the matter in this book is contained in Chapter 2, where boundary value problems are presented as extension problems of symmetric operators or relations. Here the notions of boundary triplets and their Weyl functions are introduced, and the fundamental properties of these objects are provided. Particular attention is paid to the question of existence and uniqueness of boundary triplets. Closely connected with a boundary triplet is Kreĭn's resolvent formula for canonical extensions and self-adjoint extensions in larger Hilbert spaces.

Chapter 3 is a continuation and further refinement of the techniques in the previous chapter. Here the main objective is to give a detailed description of the complete spectrum of the self-adjoint extensions of a symmetric relation in terms of the Weyl function. The connection between the limit properties of the Weyl function and the spectrum of the self-adjoint extension is explained via the Borel transform of the spectral measure.

Most of the topics in Chapter 4 are supplementary to the main text as they are concerned with a certain type of inverse problem. More precisely, it will be shown that any (uniformly strict) operator-valued Nevanlinna function can be realized as the Weyl function corresponding to a boundary triplet for a symmetric relation in a reproducing kernel Hilbert model space. Of independent interest is the discussion around the orthogonal coupling of boundary triplets with a view to exit space extensions.

Another central theme in this monograph is presented in Chapter 5, where the important case of semibounded symmetric relations is treated in more detail; here the general methods from Chapter 2 are further developed. The chapter starts with an introduction to closed semibounded forms and the corresponding representation theorems, and continues with the Friedrichs extension, the so-called Kreĭn type extensions, and the Kreĭn–von Neumann extension. The ultimate result is a description of the semibounded self-adjoint extensions of a semibounded relation via the notions of a boundary triplet and a boundary pair; this establishes the connection with the Kreĭn–Birman–Vishik theory.

The general theory is applied to boundary value problems for differential operators in Chapters 6–8 in three different situations. In each case the presentation follows a similar scheme: after the necessary preparations to keep these chapters mostly self-contained, explicit boundary triplets and Weyl functions for the particular operators or relations under consideration are provided. A further spectral analysis, depending on the nature of the problem, is presented. The class of Sturm–Liouville operators discussed in Chapter 6 also covers the example given earlier in this introduction. A good deal of preparation is needed to construct closed semibounded forms and corresponding boundary pairs in the singular situation. Chapter 7 deals with 2 × 2 canonical systems of differential equations and also illustrates the role of linear relations in the analysis of such systems. Finally, in Chapter 8 Schrödinger operators on bounded domains $\Omega \subset \mathbb{R}^n$ are treated, where one of the main challenges is to construct Dirichlet and Neumann traces on the maximal domain.

For the reader's convenience a number of appendices have been added: they contain material concerning Nevanlinna functions and some useful elementary observations on operators and subspaces in Hilbert spaces. At the end of the text a few notes and some (historical) comments, as well as a list of recent and earlier references, can be found. Here the reader is also referred to some recent literature for topics that go beyond this monograph.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 1 Linear Relations in Hilbert Spaces**

A linear relation from one Hilbert space to another Hilbert space is a linear subspace of the product of these spaces. In this chapter some material about such linear relations is presented and it is shown how linear operators, whether densely defined or not, fit in this context. The basic terminology is provided in Section 1.1 and afterwards the spectrum, resolvent set, the adjoint, and operator decompositions of linear relations are discussed in Section 1.2 and Section 1.3. Linear relations with special properties, such as symmetric, self-adjoint, dissipative, and accumulative relations, are investigated in Sections 1.4, 1.5, and 1.6. More details on self-adjoint and semibounded relations can be found in Chapter 3 and Chapter 5. Intermediate extensions and the classical von Neumann formulas describing self-adjoint extensions of symmetric operators and relations can be found in Section 1.7. In Section 1.8 it is shown that there is a natural indefinite inner product by means of which the notion of adjoint relation corresponds to the notion of orthogonal companion. Strong graph convergence and strong resolvent convergence of sequences of linear relations are discussed in Section 1.9 and parametric representations of linear relations are studied in Section 1.10. Finally, in Section 1.11 some useful properties of a resolvent-type operator of a linear relation are given, and in Section 1.12 the class of so-called Nevanlinna families, a natural extension of the class of Nevanlinna functions (see Appendix A) is studied.

## **1.1 Elementary facts about linear relations**

Let $\mathfrak H$ and $\mathfrak K$ be Hilbert spaces over $\mathbb C$. The Hilbert space inner product and the corresponding norm are usually denoted by $(\cdot, \cdot)$ and $\|\cdot\|$, respectively, and sometimes a subindex will be used in order to avoid confusion. The inner product is linear in the first entry and antilinear in the second entry. The orthogonal complement will be denoted by $\perp$; sometimes a subindex will be used to indicate the relevant space. The product $\mathfrak H \times \mathfrak K$ will often be regarded as a Hilbert space with the standard inner product $(\cdot, \cdot)_{\mathfrak H} + (\cdot, \cdot)_{\mathfrak K}$, and all topological notions in $\mathfrak H \times \mathfrak K$ are understood with respect to the topology induced by the corresponding norm. The product space $\mathfrak H \times \mathfrak K$ will also be written as $\mathfrak H \oplus \mathfrak K$, and $\mathfrak H$ and $\mathfrak K$ are then regarded as closed linear subspaces of $\mathfrak H \oplus \mathfrak K$ which are orthogonal to each other.

A linear subspace of $\mathfrak H \times \mathfrak K$ is called a linear relation from $\mathfrak H$ to $\mathfrak K$. If $H$ is a linear relation from $\mathfrak H$ to $\mathfrak K$, its elements will in general be written as pairs $\{h, h'\}$ with components $h \in \mathfrak H$ and $h' \in \mathfrak K$. If $\mathfrak K = \mathfrak H$ one speaks simply of a linear relation in $\mathfrak H$. After this introductory section the adjective linear is usually omitted and one speaks of relations when linear relations are meant.

The domain, range, kernel, and multivalued part of a linear relation H from H to K are defined by

$$\begin{aligned} \text{dom}\,H &= \{h \in \mathfrak{H} : \{h, h'\} \in H \text{ for some } h' \in \mathfrak{K}\}, \\ \text{ran}\,H &= \{h' \in \mathfrak{K} : \{h, h'\} \in H \text{ for some } h \in \mathfrak{H}\}, \\ \text{ker}\,H &= \{h \in \mathfrak{H} : \{h, 0\} \in H\}, \\ \text{mul}\,H &= \{h' \in \mathfrak{K} : \{0, h'\} \in H\}, \end{aligned}$$

respectively. The closure of the linear space $\operatorname{dom} H$ will be denoted by $\overline{\operatorname{dom}}\,H$ and, likewise, the closure of the linear space $\operatorname{ran} H$ will be denoted by $\overline{\operatorname{ran}}\,H$. Note that each linear operator $H$ from $\mathfrak H$ to $\mathfrak K$ is a linear relation if the operator is identified with its graph,

$$H = \{ \{ h, Hh \} \, : \, h \in \text{dom}\, H \},$$

and that a linear relation $H$ is (the graph of) an operator if and only if the multivalued part of $H$ is trivial, $\operatorname{mul} H = \{0\}$. The inverse $H^{-1}$ of a linear relation $H$ from $\mathfrak H$ to $\mathfrak K$ is defined by

$$H^{-1} = \left\{ \{h', h\} : \{h, h'\} \in H \right\},$$

so that $H^{-1}$ is a linear relation from $\mathfrak K$ to $\mathfrak H$. In the next lemma some obvious facts concerning the inverse relation are collected.

**Lemma 1.1.1.** Let H be a linear relation from H to K. Then the following identities hold:

$$\begin{aligned} \text{dom}\,H^{-1} &= \text{ran}\,H, & \text{ran}\,H^{-1} &= \text{dom}\,H,\\ \text{ker}\,H^{-1} &= \text{mul}\,H, & \text{mul}\,H^{-1} &= \text{ker}\,H.\end{aligned}$$
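These definitions become very concrete in finite dimensions, where a linear relation from $\mathbb C^n$ to $\mathbb C^m$ is just the column space of an $(n+m)\times k$ matrix. The following numerical sketch (not from the text; the helper names `inverse`, `mul`, `ker`, and `same_span` are ad hoc, and `numpy` is assumed) checks the identities of Lemma 1.1.1 for a relation with nontrivial multivalued part.

```python
import numpy as np

# A linear relation H from C^n to C^m is a subspace of C^(n+m), represented
# here by a matrix whose columns span it: the first n rows hold the first
# component h of a pair {h, h'}, the last m rows the second component h'.
n = m = 2
G = np.array([[1.0, 0.0],   # h-components of the two spanning pairs
              [0.0, 0.0],
              [1.0, 0.0],   # h'-components
              [0.0, 1.0]])  # H = span{ {e1, e1}, {0, e2} }, so mul H = span{e2}

def nullspace(A, tol=1e-10):
    # Orthonormal basis of ker A via the SVD.
    _, s, vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T

def inverse(G):
    return np.vstack([G[n:], G[:n]])   # swap components: {h, h'} -> {h', h}

def mul(G):
    return G[n:] @ nullspace(G[:n])    # second components of the pairs {0, h'}

def ker(G):
    return G[:n] @ nullspace(G[n:])    # first components of the pairs {h, 0}

def same_span(A, B, tol=1e-10):
    # Column spaces coincide iff stacking the matrices adds no rank.
    r = np.linalg.matrix_rank
    return r(np.hstack([A, B]), tol=tol) == r(A, tol=tol) == r(B, tol=tol)

Gi = inverse(G)
assert same_span(G[:n], np.array([[1.0], [0.0]]))    # dom H = span{e1}
assert same_span(mul(G), np.array([[0.0], [1.0]]))   # mul H = span{e2}
assert same_span(ker(Gi), mul(G))                    # ker H^{-1} = mul H
assert mul(Gi).shape[1] == ker(G).shape[1] == 0      # mul H^{-1} = ker H = {0}
```

The domain and range identities of the lemma are immediate in this picture, since `inverse` merely exchanges the two component blocks.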

There is a linear structure on the collection of linear relations from H to K. For linear relations H and K from H to K the componentwise sum is the linear relation from H to K defined by

$$H \stackrel{\frown}{+} K = \left\{ \{h+k, h'+k'\} : \{h, h'\} \in H, \{k, k'\} \in K \right\},\tag{1.1.1}$$

while the product $\lambda H$ of $H$ with a scalar $\lambda \in \mathbb{C}$ is the linear relation from $\mathfrak H$ to $\mathfrak K$ defined by

$$
\lambda H = \left\{ \{h, \lambda h'\} : \{h, h'\} \in H \right\}.
$$

Note that the componentwise sum $H \,\widehat{+}\, K$ is the linear span of the graphs of $H$ and $K$, and

$$\operatorname{dom}(H\,\widehat{+}\,K) = \operatorname{dom} H + \operatorname{dom} K, \quad \operatorname{ran}(H\,\widehat{+}\,K) = \operatorname{ran} H + \operatorname{ran} K.$$

Likewise, if $\lambda \in \mathbb{C}$, one has

$\operatorname{dom} \lambda H = \operatorname{dom} H$ and, for $\lambda \neq 0$, $\operatorname{ran} \lambda H = \operatorname{ran} H$.

Note that by definition $0\,H = O_{\operatorname{dom} H}$, where $O_{\operatorname{dom} H}$ stands for the zero operator on $\operatorname{dom} H$. It is useful to note that

$$(H \stackrel\frown{+} K)^{-1} = H^{-1} \stackrel\frown{+} K^{-1}, \qquad (\lambda H)^{-1} = \frac{1}{\lambda} H^{-1}, \quad \lambda \neq 0.$$

Let H and K be linear relations from H to K. If H ⊂ K, then H is called a restriction of K and K is an extension of H.

**Proposition 1.1.2.** Let H and K be linear relations from H to K and assume that H ⊂ K. Then

$$
\operatorname{dom} H = \operatorname{dom} K \quad \Leftrightarrow \quad K = H \stackrel{\frown}{+} \left( \{0\} \times \operatorname{mul} K \right), \tag{1.1.2}
$$

and, analogously,

$$
\operatorname{ran} H = \operatorname{ran} K \quad \Leftrightarrow \quad K = H \stackrel{\frown}{+} \left( \ker K \times \{0\} \right). \tag{1.1.3}
$$

Proof. Note that $H \subset K$ is equivalent to $H^{-1} \subset K^{-1}$. Hence, in order to prove (1.1.3) one just applies (1.1.2) with $H$ and $K$ replaced by $H^{-1}$ and $K^{-1}$, respectively. Thus it suffices to show (1.1.2). The implication $(\Leftarrow)$ is trivial. To show $(\Rightarrow)$, observe that $H \subset K$ yields $H \,\widehat{+}\, (\{0\} \times \operatorname{mul} K) \subset K$ and hence it suffices to show that $K \subset H \,\widehat{+}\, (\{0\} \times \operatorname{mul} K)$. Let $\{h, h'\} \in K$. Since $h \in \operatorname{dom} K = \operatorname{dom} H$, there exists an element $k' \in \mathfrak K$ such that $\{h, k'\} \in H$, and from $H \subset K$ it follows that also $\{h, k'\} \in K$. Hence, with $\varphi' = h' - k'$ one has

$$\{h, h'\} = \{h, k'\} + \{0, \varphi'\},$$

and thus $\{0, \varphi'\} \in K$, i.e., $\varphi' \in \operatorname{mul} K$. $\square$

**Corollary 1.1.3.** Let H and K be linear relations from H to K and assume that H ⊂ K. Then

$$
\operatorname{dom} H = \operatorname{dom} K \quad \text{and} \quad \operatorname{mul} H = \operatorname{mul} K \quad \Leftrightarrow \quad H = K,\tag{1.1.4}
$$

and, analogously,

$$
\operatorname{ran} H = \operatorname{ran} K \quad \text{and} \quad \ker H = \ker K \quad \Leftrightarrow \quad H = K. \tag{1.1.5}
$$

Proof. It suffices to show (1.1.4), as (1.1.5) follows by taking inverses in (1.1.4). Clearly, the implication (⇐) is trivial. For the implication (⇒) apply (1.1.2). Then dom H = dom K and mul H = mul K give successively

$$K = H \stackrel{\frown}{+} \left( \{ 0 \} \times \text{mul } K \right) = H \stackrel{\frown}{+} \left( \{ 0 \} \times \text{mul } H \right) \subset H,$$

which together with $H \subset K$ implies $H = K$. $\square$

Let H and K be linear relations from H to K. The usual (operatorwise) sum H + K is defined by

$$H + K = \left\{ \{h, h' + h''\} : \{h, h'\} \in H, \{h, h''\} \in K \right\},$$

where $\operatorname{dom} (H + K) = \operatorname{dom} H \cap \operatorname{dom} K$. Note that $\operatorname{mul} (H + K) = \operatorname{mul} H + \operatorname{mul} K$. If $H$ is a linear relation in $\mathfrak H$, then for $\lambda \in \mathbb{C}$ the sum $H + \lambda I$, where $I$ denotes the identity operator in $\mathfrak H$, is usually simply written as $H + \lambda$ and has the form

$$H + \lambda = \left\{ \{h, h' + \lambda h\} \, : \, \{h, h'\} \in H \right\},$$

with dom (H + λ) = dom H. Note that mul (H + λ) = mul H.

Let H be a linear relation from H to K and let K be a linear relation from K to G, where G is another Hilbert space. Then the product KH of K and H is the linear relation from H to G defined by

$$KH = \left\{ \{h, h''\} : \{h, h'\} \in H, \{h', h''\} \in K \right\}.$$

Note that for $\lambda \in \mathbb{C}$ the notation $\lambda H$ agrees with $(\lambda I)H$, where $I$ denotes the identity operator in $\mathfrak K$. It is straightforward to check that $(KH)^{-1} = H^{-1}K^{-1}$.
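For boundedly invertible operators the inversion rule for products reduces to the familiar matrix fact. A quick sanity check (not from the text; `numpy` assumed):

```python
import numpy as np

# Two invertible operators on C^2, viewed as (graphs of) linear relations.
H = np.array([[2.0, 1.0],
              [0.0, 1.0]])
K = np.array([[1.0, 3.0],
              [0.0, 2.0]])
inv = np.linalg.inv

# (KH)^{-1} = H^{-1} K^{-1}: invert the composition by composing the
# inverses in the opposite order.
assert np.allclose(inv(K @ H), inv(H) @ inv(K))
```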

The following lemma shows an important feature of sums and products of linear relations. The notation $I_{\mathfrak M}$ stands for the identity operator on the linear subspace $\mathfrak M$, while $O_{\mathfrak M}$ stands for the zero operator on $\mathfrak M$.

**Lemma 1.1.4.** Let H be a linear relation from H to K. Then

$$H + (-H) = O\_{\text{dom}\,H} \hat{+} \left( \{ 0 \} \times \text{mul}\,H \right),\tag{1.1.6}$$

where the sum is direct. Moreover, the identities

$$HH^{-1} = I\_{\text{ran}\,H} \hat{+} \left( \{0\} \times \text{mul}\,H \right) \tag{1.1.7}$$

and

$$H^{-1}H = I\_{\text{dom}\,H} \hat{+} \left( \{ 0 \} \times \ker H \right) \tag{1.1.8}$$

hold, and both sums are direct.

Proof. First the identity (1.1.6) will be shown. For an element on the left-hand side of (1.1.6) one has

$$\{h, h' - h''\} = \{h, 0\} + \{0, h' - h''\},$$

where $\{h, h'\}, \{h, h''\} \in H$, so that $\{h, 0\} \in O_{\operatorname{dom} H}$ and $\{0, h' - h''\} \in \{0\} \times \operatorname{mul} H$.

Conversely, let $\{h, k\} \in O_{\operatorname{dom} H} \,\hat{+}\, (\{0\} \times \operatorname{mul} H)$. Then $\{h, k\} = \{h, 0\} + \{0, k\}$ with $h \in \operatorname{dom} H$ and $k \in \operatorname{mul} H$. Hence, $\{h, h'\} \in H$ for some $h' \in \mathfrak K$, so that also $\{h, h' - k\} \in H$. Consequently,

$$\{h, k\} = \{h, h' - (h' - k)\} \in H + (-H),$$

which completes the proof of (1.1.6).

The assertion (1.1.8) follows from (1.1.7) by replacing $H$ with $H^{-1}$. Hence, only the identity in (1.1.7) has to be proved. By definition, the linear relation $HH^{-1}$ is given by

$$HH^{-1} = \left\{ \{h, h''\} : \{h, h'\} \in H^{-1}, \,\{h', h''\} \in H \right\}.$$

Therefore, if $\{h, h''\} \in HH^{-1}$ with some $\{h, h'\} \in H^{-1}$ and $\{h', h''\} \in H$, then

$$\{h, h''\} = \{h, h\} + \{0, h'' - h\}.$$

As $\{h', h\} \in H$, it follows that $h \in \operatorname{ran} H$ and

$$\{0, h'' - h\} = \{h', h''\} - \{h', h\} \in H,$$

i.e., $h'' - h \in \operatorname{mul} H$. Thus, $\{h, h''\} \in I_{\operatorname{ran} H} \,\hat{+}\, (\{0\} \times \operatorname{mul} H)$.

Conversely, given an element $\{h, h\} + \{0, k\} \in I_{\operatorname{ran} H} \,\hat{+}\, (\{0\} \times \operatorname{mul} H)$ with $h \in \operatorname{ran} H$ and $k \in \operatorname{mul} H$, there exists $h' \in \operatorname{dom} H$ such that $\{h', h\} \in H$ or, equivalently, $\{h, h'\} \in H^{-1}$. Since $\{0, k\} \in H$ it follows that $\{h', h + k\} \in H$, so that $\{h, h + k\} \in HH^{-1}$. $\square$

Thus far the Hilbert space structure of the spaces has not been used; only the linear space structure played a role. Now an interpretation of the componentwise sum $H \,\widehat{+}\, K$ in (1.1.1) will be given as an orthogonal componentwise sum. Let $\mathfrak H_1$, $\mathfrak H_2$, $\mathfrak K_1$, and $\mathfrak K_2$ be Hilbert spaces and let $\mathfrak H = \mathfrak H_1 \oplus \mathfrak H_2$ and $\mathfrak K = \mathfrak K_1 \oplus \mathfrak K_2$. Here and in the following $\mathfrak H_1$ and $\mathfrak H_2$ are viewed as closed linear subspaces of $\mathfrak H$, and $\mathfrak K_1$ and $\mathfrak K_2$ are viewed as closed linear subspaces of $\mathfrak K$. Assume that $H$ is a linear relation from $\mathfrak H_1$ to $\mathfrak K_1$ and that $K$ is a linear relation from $\mathfrak H_2$ to $\mathfrak K_2$. The orthogonal sum $H \oplus K$ is defined as

$$H \oplus K = \left\{ \{h+k, h'+k'\} : \{h, h'\} \in H, \,\{k, k'\} \in K \right\}.$$

In other words, $H \oplus K$ is just the componentwise sum $H \,\widehat{+}\, K$ of $H$ and $K$, when these linear relations $H$ and $K$ are interpreted as linear relations from $\mathfrak H = \mathfrak H_1 \oplus \mathfrak H_2$ to $\mathfrak K = \mathfrak K_1 \oplus \mathfrak K_2$. If $\mathfrak H = \mathfrak K$ and $\mathfrak H_1 = \mathfrak K_1$, $\mathfrak H_2 = \mathfrak K_2$, then this definition implies

$$(H \oplus K)^2 = H^2 \oplus K^2. \tag{1.1.9}$$

A linear relation $H$ from $\mathfrak H$ to $\mathfrak K$ is called bounded if there is a constant $C \geq 0$ such that $\|h'\|_{\mathfrak K} \leq C \|h\|_{\mathfrak H}$ for all $\{h, h'\} \in H$. In this case it is clear that $\operatorname{mul} H = \{0\}$, so that $H$ is a bounded operator. Thus, there is no distinction between bounded linear relations and bounded linear operators. The set of everywhere defined bounded linear operators from $\mathfrak H$ to $\mathfrak K$ will be denoted by $\mathbf{B}(\mathfrak H, \mathfrak K)$. If $\mathfrak H = \mathfrak K$ the notation $\mathbf{B}(\mathfrak H)$ is used instead of $\mathbf{B}(\mathfrak H, \mathfrak H)$.

A linear relation from $\mathfrak H$ to $\mathfrak K$ is called closed if it is closed as a linear subspace of $\mathfrak H \times \mathfrak K$. The closure $\overline{H}$ of the linear relation $H$ as a linear subspace of $\mathfrak H \times \mathfrak K$ is itself a closed linear relation. It follows that $\operatorname{mul} H \subset \operatorname{mul} \overline{H}$; if $\operatorname{mul} H = \{0\}$ and also $\operatorname{mul} \overline{H} = \{0\}$, then the operator $H$ is called closable (as an operator). The following useful observations are easily verified.

**Lemma 1.1.5.** Let H be a linear operator from H to K. Then the following statements hold:


A linear relation $H$ from $\mathfrak H$ to $\mathfrak K$ is called contractive if $\|h'\|_{\mathfrak K} \leq \|h\|_{\mathfrak H}$ for all $\{h, h'\} \in H$, and it is called isometric if $\|h'\|_{\mathfrak K} = \|h\|_{\mathfrak H}$ for all $\{h, h'\} \in H$. In each case $\operatorname{mul} H = \{0\}$ and $H$ is an operator which is bounded and thus closable; cf. Lemma 1.1.5. Hence, there is no distinction between contractive relations and operators. Likewise, there is no distinction between isometric relations and operators. Clearly, the closure of a contractive or isometric operator is again contractive or isometric. Recall that a contraction $H$ has the following useful property: if $\|Hk\|_{\mathfrak K} = \|k\|_{\mathfrak H}$ for some $k \in \operatorname{dom} H$, then

$$(Hh, Hk)_{\mathfrak{K}} = (h, k)_{\mathfrak{H}} \quad \text{for all} \quad h \in \operatorname{dom} H. \tag{1.1.10}$$

To see this, note that for all $\lambda \in \mathbb{C}$

$$\begin{aligned} 0 &\le \|h + \lambda k\|_{\mathfrak{H}}^2 - \|H(h + \lambda k)\|_{\mathfrak{K}}^2 \\ &= \|h\|_{\mathfrak{H}}^2 - \|Hh\|_{\mathfrak{K}}^2 - 2\mathrm{Re}\left(\overline{\lambda}\left[ (Hh, Hk)_{\mathfrak{K}} - (h, k)_{\mathfrak{H}} \right] \right), \end{aligned}$$

and, since $\lambda \in \mathbb{C}$ is arbitrary, the term in square brackets must vanish, which implies that (1.1.10) holds.
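A small numerical illustration of property (1.1.10) (not from the text; `numpy` assumed): the diagonal contraction below attains the norm at $k = e_1$, and the inner products then agree for every $h$.

```python
import numpy as np

# H = diag(1, 1/2) is a contraction on C^2, and k = e1 satisfies ||Hk|| = ||k||.
H = np.diag([1.0, 0.5])
k = np.array([1.0, 0.0])
assert np.isclose(np.linalg.norm(H @ k), np.linalg.norm(k))

# (Hh, Hk) = (h, k) for every h in dom H, as in (1.1.10).
for h in (np.array([2.0, 3.0]), np.array([-1.0, 4.0]), np.array([0.5, -2.5])):
    assert np.isclose((H @ h) @ (H @ k), h @ k)
```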

For many combinations of linear relations the closedness is preserved. For instance, if $H$ is a closed linear relation from $\mathfrak H$ to $\mathfrak K$, then $H^{-1}$ is a closed linear relation from $\mathfrak K$ to $\mathfrak H$. Likewise, for $\lambda \neq 0$ the product $\lambda H$ is closed. If $H$ and $K$ are closed linear relations from $\mathfrak H$ to $\mathfrak K$, then the componentwise sum $H \,\widehat{+}\, K$ is not necessarily closed (see Appendix C), while the orthogonal componentwise sum $H \oplus K$ of $H$ and $K$ is closed. The sum $H + K$ of two closed linear relations $H$ and $K$ is not necessarily closed. However, in the special case that $H$ is closed and $K \in \mathbf{B}(\mathfrak H, \mathfrak K)$ the sum

$$H + K = \left\{ \{h, h' + Kh\} : \{h, h'\} \in H \right\}$$

is also closed. In particular, the linear relation $H$ in $\mathfrak H$ is closed if and only if $H + \lambda$ is closed for some, and hence for all, $\lambda \in \mathbb{C}$. The product $KH$ of closed linear relations $K$ and $H$ is not necessarily closed. However, in the special case that $K$ is closed and $H \in \mathbf{B}(\mathfrak H, \mathfrak K)$ the product

$$KH = \left\{ \{h, h''\} : \{Hh, h''\} \in K \right\}$$

is also closed.

The above material will be used throughout the text. The rest of this section will be devoted to two specific items, namely, a discussion of questions around the so-called resolvent identity, and one involving Möbius transformations of linear relations.

For a linear relation $H$ in $\mathfrak H$ and $\lambda \in \mathbb{C}$, the resolvent relation is defined by $(H - \lambda)^{-1}$. Clearly, $H$ is closed if and only if $(H - \lambda)^{-1}$ is closed for some, and hence for all, $\lambda \in \mathbb{C}$. The resolvent relation has a number of properties which will now be explored. First the $\lambda$-independence of $\ker (H - \lambda)^{-1}$ and $\operatorname{mul} (H - \lambda)$ is stated.

**Lemma 1.1.6.** Let $H$ be a linear relation in $\mathfrak H$ and let $\lambda \in \mathbb{C}$. Then

$$\ker\left(H - \lambda\right)^{-1} = \text{mul}\left(H - \lambda\right) = \text{mul}\,H.$$

For practical purposes it is worthwhile mentioning the analogs of (1.1.7) and (1.1.8) for the resolvent relation of H. Using Lemma 1.1.6 one sees that

$$(H - \lambda)(H - \lambda)^{-1} = I_{\operatorname{ran}\,(H - \lambda)} \hat{+} \left( \{0\} \times \operatorname{mul} H \right),$$

and, likewise,

$$(H - \lambda)^{-1}(H - \lambda) = I\_{\text{dom }H} \hat{+} \left( \{0\} \times \ker \left( H - \lambda \right) \right).$$

In particular, when $\ker (H - \lambda) = \{0\}$ for some $\lambda \in \mathbb{C}$, one has

$$(H - \lambda)^{-1}(H - \lambda) = I_{\operatorname{dom}\,H}.$$

The resolvent identity in the next proposition involves a combination of the sum and the product of the resolvent relations $(H - \lambda)^{-1}$ and $(H - \mu)^{-1}$.

**Proposition 1.1.7.** Let $H$ be a linear relation in $\mathfrak H$ and let $\lambda, \mu \in \mathbb{C}$. Then

$$(H - \lambda)^{-1} - (H - \mu)^{-1} = (H - \lambda)^{-1}(\lambda - \mu)(H - \mu)^{-1}.\tag{1.1.11}$$

If $\ker (H - \lambda) = \{0\}$ and $\ker (H - \mu) = \{0\}$, then $(H - \lambda)^{-1}$ and $(H - \mu)^{-1}$ are linear operators defined on $\operatorname{ran} (H - \lambda)$ and $\operatorname{ran} (H - \mu)$, respectively, with the same kernel $\operatorname{mul} H$. Moreover, if $\lambda \neq \mu$, then (1.1.11) may be written as

$$(H - \lambda)^{-1} - (H - \mu)^{-1} = (\lambda - \mu)(H - \lambda)^{-1}(H - \mu)^{-1}.\tag{1.1.12}$$

Proof. For the inclusion (⊂) in (1.1.11) let

$$\{h, h' - h''\} \in \left(H - \lambda\right)^{-1} - \left(H - \mu\right)^{-1},$$

with $\{h, h'\} \in (H - \lambda)^{-1}$ and $\{h, h''\} \in (H - \mu)^{-1}$. This gives

$$\{h', h + \lambda h'\} \in H \quad \text{and} \quad \{h'', h + \mu h''\} \in H,$$

which shows $\{h' - h'', \lambda h' - \mu h''\} \in H$, and thus $\{h' - h'', (\lambda - \mu)h''\} \in H - \lambda$ and

$$\left\{ (\lambda - \mu)h'', h' - h'' \right\} \in (H - \lambda)^{-1}.$$

Since $\{h, h''\} \in (H - \mu)^{-1}$, one sees that $\{h, (\lambda - \mu)h''\} \in (\lambda - \mu)(H - \mu)^{-1}$, as $\{h'', (\lambda - \mu)h''\} \in (\lambda - \mu)I$. Hence, the element $\{h, h' - h''\}$ belongs to the linear relation $(H - \lambda)^{-1}(\lambda - \mu)(H - \mu)^{-1}$, which shows the inclusion.

For the inclusion $(\supset)$ in (1.1.11), let $\{h, h'\} \in (H - \lambda)^{-1}(\lambda - \mu)(H - \mu)^{-1}$. Then by definition there exists $k \in \mathfrak H$ such that

$$\{h, k\} \in \left(H - \mu\right)^{-1} \quad \text{and} \quad \{\left(\lambda - \mu\right)k, h'\} \in \left(H - \lambda\right)^{-1},$$

as {k,(λ − μ)k} ∈ (λ − μ)I. In addition, it is clear from {k, h} ∈ H − μ that {k, h + (μ − λ)k} ∈ H − λ and

$$\{h + (\mu - \lambda)k, k\} \in (H - \lambda)^{-1}.$$

Thus, it follows that $\{h, h' + k\} \in (H - \lambda)^{-1}$. Hence, $\{h, h'\} = \{h, h' + k - k\}$ belongs to $(H - \lambda)^{-1} - (H - \mu)^{-1}$, which shows the inclusion. This completes the proof of (1.1.11). If $\lambda \neq \mu$ this leads to (1.1.12).

The remaining statements follow directly from Lemma 1.1.6. $\square$

Note that in general the identity in (1.1.12) is not valid for $\lambda = \mu$. In this case the right-hand side of (1.1.12) clearly equals $O_{\operatorname{dom}\,(H-\lambda)^{-2}}$, while by (1.1.6) the left-hand side equals $O_{\operatorname{dom}\,(H-\lambda)^{-1}} \,\hat{+}\, \left( \{0\} \times \operatorname{mul}\,(H - \lambda)^{-1} \right)$. Hence, in (1.1.12) the right-hand side is contained in the left-hand side.
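In the operator case the resolvent identity can be checked directly with matrices. The following sketch (not from the text; `numpy` assumed) uses a fixed $3 \times 3$ operator whose eigenvalues are the cube roots of unity, so that the resolvents below exist.

```python
import numpy as np

# Cyclic shift on C^3; its eigenvalues are the cube roots of unity, so
# H - lam and H - mu are invertible for the points chosen below.
H = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
I = np.eye(3)
R = lambda z: np.linalg.inv(H - z * I)   # resolvent operator (H - z)^{-1}
lam, mu = 2.0 + 1.0j, -1.5

lhs = R(lam) - R(mu)
assert np.allclose(lhs, (lam - mu) * R(lam) @ R(mu))   # identity (1.1.12)
assert np.allclose(lhs, (lam - mu) * R(mu) @ R(lam))   # the two factors commute
```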

The following result shows that every linear relation $H$ can be represented by means of a pair of operators expressed in terms of its resolvent operator $(H - \lambda)^{-1}$. This kind of representation of a linear relation will be considered in this text in various situations.

**Lemma 1.1.8.** Let $H$ be a linear relation in $\mathfrak H$ and assume that $\ker (H - \lambda) = \{0\}$ for some $\lambda \in \mathbb{C}$. Then

$$H = \left\{ \left\{ (H - \lambda)^{-1} k, (I + \lambda(H - \lambda)^{-1})k \right\} : k \in \text{ran} \left( H - \lambda \right) \right\},\tag{1.1.13}$$

where the right-hand side is well defined since $\operatorname{dom} (H - \lambda)^{-1} = \operatorname{ran} (H - \lambda)$.


Proof. Denote the linear relation on the right-hand side of (1.1.13) by $K$. To see that $H \subset K$, let $\{h, h'\} \in H$. Then $\{h' - \lambda h, h\} \in (H - \lambda)^{-1}$ and from the assumption $\operatorname{mul} (H - \lambda)^{-1} = \ker (H - \lambda) = \{0\}$ it follows that

$$h = (H - \lambda)^{-1}(h' - \lambda h).$$

Therefore,

$$\begin{aligned} \{h, h'\} &= \{h, h' - \lambda h + \lambda h\} \\ &= \{ (H - \lambda)^{-1} (h' - \lambda h), (I + \lambda (H - \lambda)^{-1}) (h' - \lambda h) \}, \end{aligned}$$

where $h' - \lambda h \in \operatorname{ran} (H - \lambda)$. Hence, $\{h, h'\} \in K$, so that $H \subset K$. Now the equality follows from Corollary 1.1.3, since

$$\operatorname{dom} K = \operatorname{ran}\,(H - \lambda)^{-1} = \operatorname{dom}\,(H - \lambda) = \operatorname{dom} H,$$

while

$$\operatorname{mul} K = \ker \left( H - \lambda \right)^{-1} = \operatorname{mul} \left( H - \lambda \right) = \operatorname{mul} H.$$

This completes the proof. $\square$
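When $H$ is an operator, the representation (1.1.13) can be traced through concretely: for $k = (H - \lambda)h$ the pair in (1.1.13) is exactly $\{h, Hh\}$. A short check (not from the text; `numpy` assumed):

```python
import numpy as np

# Cyclic shift on C^3 (eigenvalues are the cube roots of unity), so H - lam
# is invertible for the nonreal point lam below.
H = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
I = np.eye(3)
lam = 2.0 + 1.0j
R = np.linalg.inv(H - lam * I)               # the resolvent (H - lam)^{-1}

h = np.array([1.0, -2.0, 0.5])
k = (H - lam * I) @ h                        # k runs over ran (H - lam)
assert np.allclose(R @ k, h)                 # first component of (1.1.13) is h
assert np.allclose(k + lam * (R @ k), H @ h) # second component is Hh
```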

Another algebraic identity involving the resolvent relations $(H - \lambda)^{-1}$ and $(H - \mu)^{-1}$ is contained in the next lemma; see also Corollary 1.2.8 in the next section. The formula in the lemma can also be checked via the Möbius transform to be defined below.

**Lemma 1.1.9.** Let $H$ be a linear relation in $\mathfrak H$ and let $\lambda, \mu \in \mathbb{C}$. Then

$$\left( I + (\lambda - \mu)(H - \lambda)^{-1} \right)^{-1} = I + (\mu - \lambda)(H - \mu)^{-1}. \tag{1.1.14}$$

Proof. It is easy to see that

$$I + (\lambda - \mu)(H - \lambda)^{-1} = \left\{ \{h' - \lambda h, h' - \mu h\} : \{h, h'\} \in H \right\},$$

and by symmetry

$$I + (\mu - \lambda)(H - \mu)^{-1} = \left\{ \{h' - \mu h, h' - \lambda h\} : \{h, h'\} \in H \right\}.$$

This yields (1.1.14). $\square$
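In the operator case the identity (1.1.14) is again a matrix computation; note that $I + (\lambda - \mu)(H - \lambda)^{-1} = (H - \lambda)^{-1}(H - \mu)$ is then invertible whenever $H - \mu$ is. A numerical check (not from the text; `numpy` assumed):

```python
import numpy as np

# Cyclic shift on C^3; lam and mu avoid the eigenvalues (cube roots of unity).
H = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
I = np.eye(3)
R = lambda z: np.linalg.inv(H - z * I)
lam, mu = 2.0 + 1.0j, -1.5

# Identity (1.1.14): (I + (lam-mu) R(lam))^{-1} = I + (mu-lam) R(mu).
lhs = np.linalg.inv(I + (lam - mu) * R(lam))
rhs = I + (mu - lam) * R(mu)
assert np.allclose(lhs, rhs)
```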

Next, Möbius transformations of linear relations will be defined. For a Hilbert space $\mathfrak H$ and a 2 × 2 matrix

$$\mathcal{M} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}, \qquad \alpha, \beta, \gamma, \delta \in \mathbb{C}, \tag{1.1.15}$$

the scalar Möbius transform $\mathcal{M}$ in $\mathfrak{H}^2 = \mathfrak{H} \times \mathfrak{H}$ is given by

$$\mathcal{M}: \mathfrak{H}^2 \to \mathfrak{H}^2, \qquad \{h, h'\} \mapsto \{\alpha h + \beta h', \gamma h + \delta h'\}.$$

The meaning of $\mathcal{M}$, either as a matrix or as a transformation, will be clear from the context. The scalar Möbius transform of a linear relation is defined as follows.

**Definition 1.1.10.** Let $H$ be a linear relation in $\mathfrak H$ and let $\mathcal{M}$ be a 2 × 2 matrix as in (1.1.15). Then the scalar Möbius transform of $H$ is the linear relation $\mathcal{M}[H]$ in $\mathfrak H$ defined by

$$\mathcal{M}[H] = \left\{ \{\alpha h + \beta h', \gamma h + \delta h'\} : \{h, h'\} \in H \right\}.\tag{1.1.16}$$

Note that the domain and range of the scalar Möbius transform $\mathcal{M}[H]$ are given by

$$\begin{aligned} \text{dom}\,\mathcal{M}[H] &= \left\{ \alpha h + \beta h' : \left\{ h, h' \right\} \in H \right\}, \\ \text{ran}\,\mathcal{M}[H] &= \left\{ \gamma h + \delta h' : \left\{ h, h' \right\} \in H \right\}. \end{aligned}$$

If the 2 × 2 matrix $\mathcal{M}$ in Definition 1.1.10 is multiplied by a constant $\eta \in \mathbb{C} \setminus \{0\}$, then the corresponding Möbius transforms $\mathcal{M}[H]$ and $(\eta \mathcal{M})[H]$ coincide.

Let M and N be 2 × 2 matrices. Then the identity

$$\mathcal{N}[\mathcal{M}[H]] = (\mathcal{N} \circ \mathcal{M})[H] \tag{1.1.17}$$

holds for any linear relation $H$ in $\mathfrak H$. If $\det \mathcal{M} \neq 0$, then

$$\mathcal{M}^{-1} = \frac{1}{\alpha \delta - \beta \gamma} \begin{pmatrix} \delta & -\beta \\ -\gamma & \alpha \end{pmatrix}$$

and the Möbius transform corresponding to $\mathcal{M}^{-1}$ is given by

$$\mathcal{M}^{-1}[H] = \left\{ \{\delta h - \beta h', -\gamma h + \alpha h'\} \, : \, \{h, h'\} \in H \right\}.$$

Thus, for any linear relation H one has

$$
\mathcal{M}^{-1}[\mathcal{M}[H]] = H = \mathcal{M}[\mathcal{M}^{-1}[H]];
$$

cf. (1.1.17). Note that in general $\mathcal{M}^{-1}[H]$ and $(\mathcal{M}[H])^{-1}$ are different relations. In the case $\det \mathcal{M} \neq 0$ it clearly follows that

$$\mathcal{M}[H] \text{ is closed if and only if } H \text{ is closed.} \tag{1.1.18}$$

Observe that the linear relations $\lambda H$, $H - \lambda$, and $H^{-1}$ correspond to the Möbius transforms determined by the following matrices

$$
\begin{pmatrix} 1 & 0 \\ 0 & \lambda \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ -\lambda & 1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
$$

respectively. Thus, for instance, the linear relations $I + (\lambda - \mu)(H - \lambda)^{-1}$ and $I + (\mu - \lambda)(H - \mu)^{-1}$ correspond to Möbius transforms of $H$ determined by the matrices

$$
\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & \lambda - \mu \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\lambda & 1 \end{pmatrix} = \begin{pmatrix} -\lambda & 1 \\ -\mu & 1 \end{pmatrix}
$$

and

$$
\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & \mu - \lambda \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\mu & 1 \end{pmatrix} = \begin{pmatrix} -\mu & 1 \\ -\lambda & 1 \end{pmatrix},
$$

respectively. This also confirms the identity (1.1.14).

For a 2 × 2 matrix $\mathcal{M}$ as in (1.1.15) with $\det \mathcal{M} \neq 0$ define the function

$$
\lambda \mapsto \mathcal{M}[\lambda] = \frac{\gamma + \lambda \delta}{\alpha + \lambda \beta}, \qquad \alpha + \lambda \beta \neq 0. \tag{1.1.19}
$$

Since the linear relation M[H] − M[λ] corresponds to the matrix

$$
\begin{pmatrix} 1 & 0 \\ -\mathcal{M}[\lambda] & 1 \end{pmatrix} \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \frac{-\lambda \det \mathcal{M}}{\alpha + \lambda \beta} & \frac{\det \mathcal{M}}{\alpha + \lambda \beta} \end{pmatrix},
$$

one sees from (1.1.16) that for α + λβ ≠ 0,

$$\mathcal{M}[H] - \mathcal{M}[\lambda] = \left\{ \left\{ \alpha h + \beta h', \frac{\det \mathcal{M}}{\alpha + \lambda \beta} (h' - \lambda h) \right\} : \left\{ h, h' \right\} \in H \right\}.$$

This identity yields, in particular, for α + λβ ≠ 0, that

$$\begin{aligned} \ker\left(H - \lambda\right) &= \ker\left(\mathcal{M}[H] - \mathcal{M}[\lambda]\right), \\ \operatorname{ran}\left(H - \lambda\right) &= \operatorname{ran}\left(\mathcal{M}[H] - \mathcal{M}[\lambda]\right). \end{aligned} \tag{1.1.20}$$

If, in addition, β ≠ 0, then it follows from (1.1.16) that

$$\operatorname{mul}\mathcal{M}[H] = \ker\left(H + \alpha\beta^{-1}\right), \qquad \operatorname{mul}H = \ker\left(\mathcal{M}[H] - \delta\beta^{-1}\right),$$

and in the case β = 0 it is easy to see that mul M[H] = mul H.

**Proposition 1.1.11.** Let H be a linear relation in H and let M be a 2 × 2 matrix as in (1.1.15) with det M ≠ 0. Then for α + λβ ≠ 0

$$\left(\mathcal{M}[H] - \mathcal{M}[\lambda]\right)^{-1} = \frac{(\alpha + \lambda\beta)\beta}{\det \mathcal{M}} + \frac{(\alpha + \lambda\beta)^2}{\det \mathcal{M}} (H - \lambda)^{-1}.\tag{1.1.21}$$

Proof. Use the abbreviation Δ = det M. It suffices to see that the left-hand side corresponds to the matrix

$$
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\mathcal{M}[\lambda] & 1 \end{pmatrix} \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \frac{-\lambda\Delta}{\alpha + \lambda\beta} & \frac{\Delta}{\alpha + \lambda\beta} \\\alpha & \beta \end{pmatrix},
$$

while the right-hand side corresponds to the matrix

$$
\begin{pmatrix} 1 & 0 \\ \frac{(\alpha + \lambda \beta)\beta}{\Delta} & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & \frac{(\alpha + \lambda \beta)^2}{\Delta} \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\lambda & 1 \end{pmatrix} = \begin{pmatrix} -\lambda & 1 \\ \frac{\alpha(\alpha + \lambda \beta)}{\Delta} & \frac{\beta(\alpha + \lambda \beta)}{\Delta} \end{pmatrix}.
$$

Since these matrices coincide up to a nonzero multiplicative constant, the assertion follows. □
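As an informal numerical check of (1.1.21) (an addition to the text), one can take H to be a matrix, in which case M[H] is the operator (γI + δH)(αI + βH)⁻¹; the matrix H and the entries of M below are arbitrary sample choices.

```python
# Numerical check of (1.1.21) for a sample matrix H and sample entries
# alpha, beta, gamma, delta of M; all concrete values are arbitrary.
import numpy as np

H = np.array([[0.0, 1.0], [2.0, 3.0]])
alpha, beta, gamma, delta = 1.0, 2.0, 3.0, 4.0
Delta = alpha * delta - beta * gamma          # det M = -2, nonzero
lam = 0.5                                     # alpha + lam*beta = 2, nonzero
I = np.eye(2)

# Moebius transform of the matrix H and the scalar Moebius function M[lam]
MH = (gamma * I + delta * H) @ np.linalg.inv(alpha * I + beta * H)
Mlam = (gamma + lam * delta) / (alpha + lam * beta)

lhs = np.linalg.inv(MH - Mlam * I)
rhs = ((alpha + lam * beta) * beta / Delta) * I \
    + ((alpha + lam * beta) ** 2 / Delta) * np.linalg.inv(H - lam * I)
assert np.allclose(lhs, rhs)
```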

It is clear that the following useful consequence of Proposition 1.1.11 is obtained by means of the special choice

$$
\mathcal{M} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
$$

so that det M = −1, M[H] = H⁻¹, and M[λ] = 1/λ for λ ≠ 0.

**Corollary 1.1.12.** Let H be a linear relation in H and let λ ∈ C \ {0}. Then

$$(H^{-1} - \lambda^{-1})^{-1} = -\lambda - \lambda^2 \left(H - \lambda\right)^{-1}.\tag{1.1.22}$$
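The identity (1.1.22) can be illustrated numerically (an informal addition to the text) by taking H to be an invertible matrix; the matrix and the value of λ below are arbitrary samples.

```python
# Numerical check of (1.1.22); H and lam are arbitrary sample choices
# with lam outside the spectrum of H and 1/lam outside that of H^{-1}.
import numpy as np

H = np.array([[1.0, 2.0], [0.0, 3.0]])   # eigenvalues 1 and 3
lam = 5.0
I = np.eye(2)

lhs = np.linalg.inv(np.linalg.inv(H) - I / lam)
rhs = -lam * I - lam ** 2 * np.linalg.inv(H - lam * I)
assert np.allclose(lhs, rhs)
```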

Next the Cayley transform and inverse Cayley transform of a linear relation will be introduced. These special Möbius transforms will be used later in Sections 1.5, 1.6, and 1.7.

**Definition 1.1.13.** Let H and V be linear relations in H and let μ ∈ C \ R. Then the Cayley transform Cμ of H and the inverse Cayley transform Fμ of V are defined by

$$\begin{aligned} \mathcal{C}\_{\mu}[H] &= \left\{ \{h' - \mu h, h' - \overline{\mu}h\} : \{h, h'\} \in H \right\}, \\ \mathcal{F}\_{\mu}[V] &= \left\{ \{k - k', \overline{\mu}k - \mu k'\} : \{k, k'\} \in V \right\}. \end{aligned} \tag{1.1.23}$$

Notice that the domain and range of the Cayley transform Cμ and the inverse Cayley transform Fμ are given by

$$\begin{aligned} \operatorname{dom}\mathcal{C}\_{\mu}[H] &= \operatorname{ran}\,(H - \mu), & \operatorname{ran}\mathcal{C}\_{\mu}[H] &= \operatorname{ran}\,(H - \overline{\mu}),\\ \operatorname{dom}\mathcal{F}\_{\mu}[V] &= \operatorname{ran}\,(I - V), & \operatorname{ran}\mathcal{F}\_{\mu}[V] &= \operatorname{ran}\,(\overline{\mu} - \mu V). \end{aligned} \tag{1.1.24}$$

It is clear that the Cayley transform Cμ and the inverse Cayley transform Fμ are Möbius transforms corresponding to the matrices

$$\mathcal{C}\_{\mu} = \begin{pmatrix} -\mu & 1\\ -\overline{\mu} & 1 \end{pmatrix} \quad \text{and} \quad \mathcal{F}\_{\mu} = \begin{pmatrix} 1 & -1\\ \overline{\mu} & -\mu \end{pmatrix} = (\overline{\mu} - \mu)\mathcal{C}\_{\mu}^{-1}, \tag{1.1.25}$$

where det Cμ = μ̄ − μ was used. Note also that

$$\mathcal{C}\_{\mu}[\lambda] = \frac{\lambda - \overline{\mu}}{\lambda - \mu}, \quad \lambda \neq \mu.$$

Thus, Proposition 1.1.11 leads to the following result.

**Corollary 1.1.14.** Let H be a linear relation in H and let μ ∈ C \ R. Then

$$\left(\mathcal{C}\_{\mu}[H] - \mathcal{C}\_{\mu}[\lambda]\right)^{-1} = \frac{\lambda - \mu}{\overline{\mu} - \mu} + \frac{(\lambda - \mu)^{2}}{\overline{\mu} - \mu}(H - \lambda)^{-1}, \quad \lambda \neq \mu. \tag{1.1.26}$$
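Identity (1.1.26) can also be tested numerically (an informal addition): for a matrix H with μ not an eigenvalue, the Cayley transform is the operator (H − μ̄)(H − μ)⁻¹. The matrix and the values of μ, λ below are arbitrary samples.

```python
# Numerical check of (1.1.26); H, mu, lam are arbitrary sample choices
# with mu nonreal and lam real, away from the relevant spectra.
import numpy as np

H = np.array([[1.0, 2.0], [0.0, 3.0]])   # eigenvalues 1 and 3
mu = 1j                                   # mu in C \ R
lam = 5.0                                 # lam != mu, lam not in sigma(H)
I = np.eye(2, dtype=complex)

C = (H - np.conj(mu) * I) @ np.linalg.inv(H - mu * I)   # Cayley transform
Clam = (lam - np.conj(mu)) / (lam - mu)                 # C_mu[lam]

lhs = np.linalg.inv(C - Clam * I)
rhs = (lam - mu) / (np.conj(mu) - mu) * I \
    + (lam - mu) ** 2 / (np.conj(mu) - mu) * np.linalg.inv(H - lam * I)
assert np.allclose(lhs, rhs)
```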

## **1.2 Spectra, resolvent sets, and points of regular type**

The resolvent set, spectrum, point, continuous, and residual spectrum, and the points of regular type of a linear relation or operator are defined. A priori it is not assumed that the linear relation is closed. Here and in the rest of the text linear relations will be referred to simply as relations and linear subspaces as subspaces.

**Definition 1.2.1.** Let H be a relation in H. Then λ ∈ C is said to be a point of regular type of H if (H − λ)⁻¹ is a (in general not everywhere defined) bounded operator. The set of points of regular type of H is denoted by γ(H).

Some straightforward consequences of Definition 1.2.1 are presented in the next lemma.

**Lemma 1.2.2.** Let H be a relation in H. Then λ ∈ γ(H) if and only if there exists a positive constant c, depending on λ, such that

$$\|h\| \le c \|h' - \lambda h\|, \quad \{h, h'\} \in H. \tag{1.2.1}$$

Moreover, if γ(H) ≠ ∅, then H is closed if and only if ran (H − λ) is closed for some, and hence for all, λ ∈ γ(H).

Proof. Assume that λ ∈ γ(H), so that (H − λ)⁻¹ is a bounded operator. Let {h, h′} ∈ H; then {h′ − λh, h} ∈ (H − λ)⁻¹ and

$$\|h\| = \|(H - \lambda)^{-1}(h' - \lambda h)\| \le c \|h' - \lambda h\|,$$

which gives (1.2.1). Conversely, assume that (1.2.1) holds. To see that (H − λ)⁻¹ is a bounded operator, let {f, f′} ∈ (H − λ)⁻¹. Then {f, f′} = {h′ − λh, h} for some {h, h′} ∈ H, and (1.2.1) shows ‖f′‖ ≤ c‖f‖ for all {f, f′} ∈ (H − λ)⁻¹. This implies that (H − λ)⁻¹ is a bounded operator or, equivalently, λ ∈ γ(H).

Assume that H is closed, so that also (H − λ)⁻¹ is closed. Then the relation (H − λ)⁻¹ is a closed and bounded operator for all λ ∈ γ(H). This immediately implies that ran (H − λ) = dom (H − λ)⁻¹ is closed; cf. Lemma 1.1.5. Conversely, if ran (H − λ) = dom (H − λ)⁻¹ is closed for some λ ∈ γ(H), then (H − λ)⁻¹ is a bounded operator defined on a closed subspace. It follows that (H − λ)⁻¹ is closed, cf. Lemma 1.1.5, and hence H is closed. □

**Definition 1.2.3.** Let H be a relation in H. A point λ ∈ C is said to belong to the resolvent set ρ(H) of H if (H − λ)⁻¹ is a bounded operator and ran (H − λ) is dense in H. The spectrum σ(H) of H is the complement of ρ(H) in C. The spectrum σ(H) decomposes into three disjoint components: the point spectrum σp(H), the continuous spectrum σc(H), and the residual spectrum σr(H), defined by

$$\begin{aligned} \sigma\_{\rm p}(H) &= \left\{ \lambda \in \mathbb{C} : \ker \left( H - \lambda \right) \neq \{ 0 \} \right\}, \\ \sigma\_{\rm c}(H) &= \left\{ \lambda \in \mathbb{C} : \ker \left( H - \lambda \right) = \{ 0 \}, \ \overline{\operatorname{ran}} \left( H - \lambda \right) = \mathfrak{H}, \ \lambda \notin \rho(H) \right\}, \\ \sigma\_{\rm r}(H) &= \left\{ \lambda \in \mathbb{C} : \ker \left( H - \lambda \right) = \{ 0 \}, \ \overline{\operatorname{ran}} \left( H - \lambda \right) \neq \mathfrak{H} \right\}. \end{aligned}$$

Let H be a relation in H. It follows from Definition 1.2.1 and Definition 1.2.3 that ρ(H) ⊂ γ(H). Moreover, it follows from (1.2.1) that γ(H) = γ(H̄), and the equivalence

$$\overline{\operatorname{ran}}\,(H - \lambda) = \mathfrak{H} \quad \Leftrightarrow \quad \overline{\operatorname{ran}}\,(\overline{H} - \lambda) = \mathfrak{H}$$

implies ρ(H) = ρ(H̄).

The following state diagram is useful when discussing the spectral subsets and the resolvent set of H. The top row shows all possibilities for the range of H − λ. The first (second) row shows all possibilities for points λ such that (H − λ)⁻¹ is a bounded (unbounded) operator, and the bottom row shows all possibilities for eigenvalues λ.


Now assume that H is a closed relation. Then it follows with the help of the closed graph theorem and Lemma 1.1.5, applied to the operator (H − λ)⁻¹, that two cases (marked by **X** below) in the above state diagram are not possible:


In particular, for a closed relation the continuous spectrum is given by

$$\sigma\_{\rm c}(H) = \left\{ \lambda \in \mathbb{C} : \ker \left( H - \lambda \right) = \{ 0 \}, \, \overline{\operatorname{ran}} \left( H - \lambda \right) = \mathfrak{H}, \, \operatorname{ran} \left( H - \lambda \right) \neq \mathfrak{H} \right\}.$$

**Lemma 1.2.4.** A relation H in H is closed if and only if ran (H − λ) = H for some, and hence for all, λ ∈ ρ(H). In this case the following statements hold:

(i) (H − λ)⁻¹ ∈ **B**(H) for all λ ∈ ρ(H);

(ii) H = {{(H − λ)⁻¹f, (I + λ(H − λ)⁻¹)f} : f ∈ H} for all λ ∈ ρ(H).
Proof. If H is closed, then for all λ ∈ C also (H − λ)⁻¹ is closed. Hence, if λ ∈ ρ(H), then (H − λ)⁻¹ is a bounded and closed operator, and therefore


$$\text{dom}\,(H-\lambda)^{-1} = \text{ran}\,(H-\lambda)$$

is closed and coincides with H; cf. Lemma 1.1.5. Conversely, if λ ∈ ρ(H) and ran (H − λ) = H, then (H − λ)⁻¹ is a bounded operator defined on H and hence (H − λ)⁻¹ is closed by Lemma 1.1.5. This implies that also H is closed. Assertion (i) is now immediate and assertion (ii) follows from (1.1.13) in Lemma 1.1.8. □

In the next theorem the so-called defect of a relation H is studied. The proof uses the notions of opening and gap of closed subspaces from Appendix C.

**Theorem 1.2.5.** Let H be a relation in H. Then the set γ(H) of points of regular type of H is an open subset of C and the defect

$$n\_{\lambda}(H) := \dim \left( \text{ran} \left( H - \lambda \right) \right)^{\perp} \tag{1.2.2}$$

of H is constant for all λ in a connected component of γ(H).

Proof. Step 1. Let μ ∈ γ(H) and let cμ > 0 be any positive constant such that

$$\|h\| \le c\_{\mu} \|h' - \mu h\|, \qquad \{h, h'\} \in H; \tag{1.2.3}$$

cf. Lemma 1.2.2. Hence, if λ ∈ C and |λ − μ|cμ < 1, then

$$|\lambda - \mu| \|h\| \le |\lambda - \mu| c\_{\mu} \, ||h' - \mu h|| < ||h' - \mu h||.$$

In this case h′ − λh = h′ − μh − (λ − μ)h yields

$$\|h' - \lambda h\| \ge \|h' - \mu h\| - |\lambda - \mu| \, \|h\| > 0,$$

and together with (1.2.3) this leads to

$$\begin{aligned} c\_{\mu} ||h' - \lambda h|| &\geq c\_{\mu} ||h' - \mu h|| - |\lambda - \mu| c\_{\mu} ||h|| \\ &\geq ||h|| - |\lambda - \mu| c\_{\mu} ||h|| \\ &= \left(1 - |\lambda - \mu| c\_{\mu}\right) ||h||. \end{aligned}$$

Since all elements of (H − λ)⁻¹ are of the form {h′ − λh, h} with {h, h′} ∈ H, it follows from this inequality that (H − λ)⁻¹ is a bounded operator. In fact, for |λ − μ|cμ < 1 one has

$$\|(H-\lambda)^{-1}g\| \le \frac{c\_{\mu}}{1-|\lambda-\mu|c\_{\mu}}\|g\|, \qquad g \in \text{dom}\left(H-\lambda\right)^{-1}.\tag{1.2.4}$$

In particular, it follows that λ ∈ γ(H) for |λ − μ|cμ < 1. Therefore, γ(H) is an open subset of C.

Step 2. Let μ ∈ γ(H) and let Pμ be the orthogonal projection onto $\overline{\operatorname{ran}}\,(H - \mu)$. For each f ∈ H one obtains

$$\|P\_{\mu}f\| = \sup\_{g \in \operatorname{ran}(H-\mu),\, g \neq 0} \frac{| (P\_{\mu}f, g) |}{\|g\|} = \sup\_{\{h, h'\} \in H \backslash \{0, 0\}} \frac{| (f, h' - \mu h) |}{\|h' - \mu h\|},$$

since ran (H − μ) = {h′ − μh : {h, h′} ∈ H}. Now choose λ ∈ C and write

$$h' - \mu h = h' - \lambda h + (\lambda - \mu)h.$$

If, in particular, f ∈ ran (H − λ)⊥, then |(f, h′ − μh)| = |λ − μ| |(f, h)|, and it follows that

$$\begin{aligned} \|P\_{\mu}f\| &= |\lambda - \mu| \sup\_{\{h, h'\} \in H\backslash\{0, 0\}} \frac{|(f, h)|}{\|h' - \mu h\|} \\ &\le |\lambda - \mu| \, \|f\| \sup\_{\{h, h'\} \in H\backslash\{0, 0\}} \frac{\|h\|}{\|h' - \mu h\|}. \end{aligned}$$

Let cμ be as in Step 1, so that ‖h‖ ≤ cμ‖h′ − μh‖ for {h, h′} ∈ H; cf. (1.2.3). Thus, for any λ ∈ C one has

$$\|P\_{\mu}f\| \le |\lambda - \mu|c\_{\mu}\|f\|, \quad f \in \operatorname{ran}\,(H-\lambda)^{\perp}.\tag{1.2.5}$$

Step 3. Let μ ∈ γ(H) and |λ − μ|cμ < 1. By Step 1, λ ∈ γ(H). Therefore, by symmetry, one obtains from Step 2 that

$$\|P\_{\lambda}g\| \le |\lambda - \mu|c\_{\lambda}\|g\|, \quad g \in \operatorname{ran}\,(H - \mu)^{\perp},\tag{1.2.6}$$

where cλ is any positive constant such that

$$\|(H - \lambda)^{-1}k\| \le c\_{\lambda} \|k\| \quad \text{for} \quad k \in \operatorname{dom}\left(H - \lambda\right)^{-1}.$$

Due to the estimate (1.2.4) one may take

$$c\_{\lambda} = \frac{c\_{\mu}}{1 - |\lambda - \mu| \, c\_{\mu}},$$

and then one concludes from the estimate (1.2.6) that

$$\|P\_{\lambda}g\| \le \frac{|\lambda - \mu|c\_{\mu}}{1 - |\lambda - \mu|c\_{\mu}} \|g\|, \quad g \in \operatorname{ran}\left(H - \mu\right)^{\perp},\tag{1.2.7}$$

for |λ − μ|cμ < 1.

Step 4. Let μ ∈ γ(H) and assume that |λ − μ|cμ < C for some number 0 < C < 1/2. Then λ ∈ γ(H) by Step 1, and

$$\begin{aligned} \|P\_{\mu}f\| &\leq C\|f\|, \quad f \in \text{ran}\,(H-\lambda)^{\perp},\\ \|P\_{\lambda}g\| &\leq \frac{C}{1-C}\|g\|, \quad g \in \text{ran}\,(H-\mu)^{\perp}, \end{aligned}$$

by (1.2.5) and (1.2.7) in Step 2 and Step 3. Therefore,

$$\omega\left(\overline{\text{ran}}\left(H-\mu\right), \text{ran}\left(H-\lambda\right)^{\perp}\right) = \left\|P\_{\mu}(I-P\_{\lambda})\right\| \le C < 1$$

and

$$\omega\left(\overline{\text{ran}}\,(H-\lambda),\text{ran}\,(H-\mu)^\perp\right) = \|P\_\lambda(I-P\_\mu)\| \le \frac{C}{1-C} < 1,$$

where ω stands for the opening between closed linear subspaces; cf. Definition C.5. For the gap in Definition C.9 one obtains

$$g\left(\overline{\text{ran}}\left(H-\mu\right), \overline{\text{ran}}\left(H-\lambda\right)\right) < 1$$

from Proposition C.10, and hence Theorem C.12 applied to the closed linear subspaces M = $\overline{\operatorname{ran}}\,(H - \mu)$ and N = $\operatorname{ran}\,(H - \lambda)^{\perp}$ implies

$$\dim\left(\text{ran}\left(H-\lambda\right)\right)^{\perp} = \dim\left(\text{ran}\left(H-\mu\right)\right)^{\perp}\tag{1.2.8}$$

for μ ∈ γ(H) and |λ − μ|cμ < C for some 0 < C < 1/2.

Step 5. Now let Γ be a connected component of γ(H). Since γ(H) is open, Γ is open and arcwise connected, and each pair of points λ1, λ2 in Γ can be joined by a compact (piecewise linear) curve in Γ. Each point μ of the curve is the center of an open disc such that (1.2.8) holds for all λ in the disc. By compactness, finitely many such open discs cover the curve and hence

$$\dim\left(\text{ran}\left(H-\lambda\_1\right)\right)^\perp = \dim\left(\text{ran}\left(H-\lambda\_2\right)\right)^\perp,$$

that is, the defect of H is constant in each connected component of γ(H). □

The next theorem is concerned with the properties of points in the resolvent set of a relation. This leads to the resolvent identity.

**Theorem 1.2.6.** Let H be a relation in H. The resolvent set ρ(H) is an open subset of C. The resolvent identity

$$(H - \lambda)^{-1} - (H - \mu)^{-1} = (H - \lambda)^{-1}(\lambda - \mu)(H - \mu)^{-1} \tag{1.2.9}$$

holds for λ, μ ∈ ρ(H); here (H − λ)⁻¹ and (H − μ)⁻¹ are bounded operators defined on ran (H − λ) and ran (H − μ), respectively. If, in addition, H is closed, then (H − λ)⁻¹, (H − μ)⁻¹ ∈ **B**(H) for λ, μ ∈ ρ(H), and (1.2.9) can be written as

$$(H - \lambda)^{-1} - (H - \mu)^{-1} = (\lambda - \mu)(H - \lambda)^{-1}(H - \mu)^{-1} \tag{1.2.10}$$

for all λ, μ ∈ ρ(H).

Proof. Recall that the inclusion ρ(H) ⊂ γ(H) holds. In fact, the resolvent set ρ(H) of H is made up of the components of γ(H) where the defect nλ(H) in (1.2.2) is zero. It follows in the same way as in Step 1 of the proof of Theorem 1.2.5 that for μ ∈ ρ(H) and λ ∈ C such that |λ − μ| ‖(H − μ)⁻¹‖ < 1 one has λ ∈ ρ(H), and hence ρ(H) is open. The identity (1.2.9) follows from Proposition 1.1.7. □

**Corollary 1.2.7.** Let H be a closed relation in H and assume that μ ∈ ρ(H) and |λ − μ| ‖(H − μ)⁻¹‖ < 1. Then λ ∈ ρ(H) and

$$(H - \lambda)^{-1} = \sum\_{n=0}^{\infty} (\lambda - \mu)^n (H - \mu)^{-(n+1)},\tag{1.2.11}$$

where the series converges in **B**(H). In particular, the mapping

$$
\lambda \mapsto (H - \lambda)^{-1}
$$

is holomorphic on ρ(H) and the limit

$$\lim\_{\lambda \to \mu} \frac{(H - \lambda)^{-1} - (H - \mu)^{-1}}{\lambda - \mu} = (H - \mu)^{-2}$$

exists in **B**(H).

Proof. With the notation R(λ) = (H − λ)⁻¹ it follows from the resolvent identity (1.2.10) and induction that

$$R(\lambda) = \sum\_{n=0}^{k} (\lambda - \mu)^n R(\mu)^{n+1} + (\lambda - \mu)^{k+1} R(\lambda) R(\mu)^{k+1}.\tag{1.2.12}$$

The last term on the right-hand side of (1.2.12) obeys the estimate

$$\| (\lambda - \mu)^{k+1} R(\lambda) R(\mu)^{k+1} \| \le \| R(\lambda) \| \left( |\lambda - \mu| \| R(\mu) \| \right)^{k+1},$$

and hence the condition |λ − μ| ‖R(μ)‖ < 1 implies that it tends to 0 in **B**(H) as k → ∞. This implies (1.2.11) and the holomorphy of λ ↦ (H − λ)⁻¹. The last assertion follows from (1.2.10). □
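The Neumann-type expansion (1.2.11) can be illustrated numerically (an informal addition): for a matrix H the truncated series converges to the resolvent as soon as the convergence condition holds. All concrete values below are arbitrary samples.

```python
# Numerical check of the series (1.2.11); H, mu, lam are sample choices
# satisfying |lam - mu| * ||(H - mu)^{-1}|| < 1.
import numpy as np

H = np.array([[1.0, 2.0], [0.0, 3.0]])   # eigenvalues 1 and 3
mu, lam = 5.0, 5.4
I = np.eye(2)

R_mu = np.linalg.inv(H - mu * I)
assert abs(lam - mu) * np.linalg.norm(R_mu, 2) < 1   # convergence condition

# partial sums of sum_n (lam - mu)^n (H - mu)^{-(n+1)}
S = np.zeros((2, 2))
term = R_mu.copy()
for n in range(60):
    S += (lam - mu) ** n * term
    term = term @ R_mu

assert np.allclose(S, np.linalg.inv(H - lam * I))
```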

**Corollary 1.2.8.** Let H be a closed relation in H and let λ, μ ∈ ρ(H). Then the operator I + (λ − μ)(H − λ)⁻¹ ∈ **B**(H) is invertible and

$$\left(I + (\lambda - \mu)(H - \lambda)^{-1}\right)^{-1} = I + (\mu - \lambda)(H - \mu)^{-1}.$$

Proof. The formal identity in terms of relations follows from Lemma 1.1.9. Since both I + (λ − μ)(H − λ)⁻¹ and I + (μ − λ)(H − μ)⁻¹ belong to **B**(H), the assertion is clear. □
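The inversion formula of Corollary 1.2.8 admits a quick numerical illustration (an informal addition); the matrix H and the points λ, μ below are arbitrary samples in ρ(H).

```python
# Numerical check of Corollary 1.2.8 for a sample matrix H and sample
# resolvent points lam, mu (all values arbitrary).
import numpy as np

H = np.array([[1.0, 2.0], [0.0, 3.0]])   # eigenvalues 1 and 3
lam, mu = 5.0, -2.0
I = np.eye(2)

A = I + (lam - mu) * np.linalg.inv(H - lam * I)
B = I + (mu - lam) * np.linalg.inv(H - mu * I)
assert np.allclose(A @ B, I)
assert np.allclose(B @ A, I)
```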

The resolvent identity in (1.2.10) characterizes the closed relation H in a specific way.

**Proposition 1.2.9.** Let <sup>E</sup> <sup>⊂</sup> <sup>C</sup> be a nonempty set and assume that the mapping λ → B(λ) from E to **B**(H) satisfies the identity

$$B(\lambda) - B(\mu) = (\lambda - \mu)B(\lambda)B(\mu), \quad \lambda, \mu \in \mathcal{E}. \tag{1.2.13}$$

Then there exists a closed relation H in H such that E ⊂ ρ(H) and

$$B(\lambda) = (H - \lambda)^{-1}, \quad \lambda \in \mathcal{E}.$$

Proof. Define for λ ∈ E the relation H(λ) by

$$H(\lambda) = B(\lambda)^{-1} + \lambda.$$

Since B(λ) ∈ **B**(H), one sees that B(λ) and thus also B(λ)⁻¹ are closed. Hence, also the relation H(λ) is closed. Note that

$$\operatorname{ran}\left(H(\lambda)-\lambda\right) = \operatorname{ran}B(\lambda)^{-1} = \operatorname{dom}B(\lambda) = \mathfrak{H},$$

while

$$\ker\left(H(\lambda) - \lambda\right) = \ker B(\lambda)^{-1} = \text{mul } B(\lambda) = \{0\},$$

so that λ ∈ ρ(H(λ)).

Now let λ, μ ∈ E and let {h, h′} ∈ H(λ). Then h = B(λ)(h′ − λh) and due to the identity (1.2.13) (with μ and λ interchanged) one gets

$$\begin{aligned} h &= B(\lambda)(h' - \lambda h) \\ &= B(\mu)(h' - \lambda h) - (\mu - \lambda)B(\mu)B(\lambda)(h' - \lambda h) \\ &= B(\mu)(h' - \mu h). \end{aligned}$$

This implies {h, h′} ∈ H(μ). Therefore, H(λ) ⊂ H(μ), which, by symmetry, leads to H(λ) = H(μ). One concludes that H := H(λ) does not depend on λ ∈ E. Thus, one sees that

$$(H - \lambda)^{-1} = B(\lambda) \quad \text{and} \quad \lambda \in \rho(H) \quad \text{for all } \lambda \in \mathcal{E},$$

which completes the proof. □
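The construction used in the proof can be illustrated numerically (an informal addition): for a matrix H the resolvents B(λ) = (H − λ)⁻¹ satisfy the identity (1.2.13), and H is recovered as B(λ)⁻¹ + λ, independently of λ. The sample values below are arbitrary.

```python
# Illustration of Proposition 1.2.9 for a sample matrix H: the resolvents
# satisfy (1.2.13) and H(lam) = B(lam)^{-1} + lam recovers H.
import numpy as np

H = np.array([[1.0, 2.0], [0.0, 3.0]])   # eigenvalues 1 and 3
lam, mu = 5.0, -2.0
I = np.eye(2)

B_lam = np.linalg.inv(H - lam * I)
B_mu = np.linalg.inv(H - mu * I)

# the identity (1.2.13)
assert np.allclose(B_lam - B_mu, (lam - mu) * B_lam @ B_mu)

# recovery of H, independently of the chosen point
assert np.allclose(np.linalg.inv(B_lam) + lam * I, H)
assert np.allclose(np.linalg.inv(B_mu) + mu * I, H)
```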

Let again H be a relation in H, let M be a 2 × 2 matrix as in (1.1.15) such that det M ≠ 0, and let

$$\mathcal{M}[H] = \left\{ \{\alpha h + \beta h', \gamma h + \delta h'\} \, : \, \{h, h'\} \in H \right\}$$

be the corresponding Möbius transform of H in Definition 1.1.10. The question is how the spectrum of H behaves under the Möbius transformation. Let the function M[λ] be defined by (1.1.19).

**Proposition 1.2.10.** Let H be a relation in H and let M be a 2 × 2 matrix as in (1.1.15) with det M ≠ 0. Then the following statements hold for α + λβ ≠ 0:

(i) λ ∈ ρ(H) if and only if M[λ] ∈ ρ(M[H]);

(ii) λ ∈ σp(H) if and only if M[λ] ∈ σp(M[H]);

(iii) λ ∈ σc(H) (σr(H)) if and only if M[λ] ∈ σc(M[H]) (σr(M[H])).
If the relation H is closed and the equivalent assertions in (i) hold, then the operators in the identity (1.1.21) belong to **B**(H).


Proof. (i) Assume that λ ∈ ρ(H), that is, ran (H − λ) is dense in H and (H − λ)⁻¹ is a bounded operator. Then the identities in (1.1.20) imply that ran (M[H] − M[λ]) is dense in H and that (M[H] − M[λ])⁻¹ is an operator. It follows with (1.1.21) that (M[H] − M[λ])⁻¹ is a bounded operator. This shows M[λ] ∈ ρ(M[H]). The converse statement follows by applying M⁻¹.

(ii) and (iii) are now straightforward consequences of (1.1.20), (1.1.21), and the above considerations. □

Let H be a relation in H and let λ ∈ C. Then it follows from Proposition 1.2.10 that for λ ≠ 0

$$
\lambda \in \rho(H) \quad \Leftrightarrow \quad \lambda^{-1} \in \rho(H^{-1}), \tag{1.2.14}
$$

in which case the resolvent operators in (1.1.22) belong to **B**(H). Likewise, it follows from Proposition 1.2.10 that for λ ≠ μ

$$
\lambda \in \rho(H) \quad \Leftrightarrow \quad \mathcal{C}\_{\mu}[\lambda] \in \rho(\mathcal{C}\_{\mu}[H]),
$$

in which case the resolvent operators in (1.1.26) belong to **B**(H).

## **1.3 Adjoint relations**

Here the adjoint of a relation will be introduced, again as a relation, which will be automatically linear and closed. If the original relation is the graph of an operator, its adjoint will be the graph of an operator precisely when the original operator is densely defined.

**Definition 1.3.1.** Let H be a relation from H to K. The adjoint H<sup>∗</sup> of H is defined as a relation from K to H by

$$H^\* := \left\{ \{f, f'\} \in \mathfrak{K} \times \mathfrak{H} \; : \; (f', h)\_{\mathfrak{H}} = (f, h')\_{\mathfrak{K}} \; \; \text{for all } \; \{h, h'\} \in H \right\}.$$
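As an informal finite-dimensional illustration (an addition to the text), suppose H is the graph of a matrix A from C² to C³; the defining identity (f′, h) = (f, h′) then forces f′ = A*f, so the adjoint relation is the graph of the conjugate-transpose matrix. The matrix and vectors below are arbitrary random samples.

```python
# Finite-dimensional illustration of Definition 1.3.1: for the graph of a
# matrix A, the pair {f, A^* f} satisfies the defining identity.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

f = rng.standard_normal(3) + 1j * rng.standard_normal(3)
f_adj = A.conj().T @ f                    # candidate f' with {f, f'} in H*

h = rng.standard_normal(2) + 1j * rng.standard_normal(2)
inner = lambda x, y: np.vdot(y, x)        # (x, y), linear in the first slot
assert np.isclose(inner(f_adj, h), inner(f, A @ h))
```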

Let J be the flip-flop operator from H × K to K × H defined by

$$J\{f, f'\} = \{f', -f\}, \qquad \{f, f'\} \in \mathfrak{H} \times \mathfrak{K}.\tag{1.3.1}$$

Then it is clear from Definition 1.3.1 that

$$H^\* = (JH)^\perp = JH^\perp,\tag{1.3.2}$$

where the orthogonal complements refer to the componentwise inner product in K × H and H × K, respectively. Note that

$$\mathfrak{K} \times \mathfrak{H} = \overline{JH} \oplus (JH)^{\perp} \quad \text{and} \quad \mathfrak{H} \times \mathfrak{K} = \overline{H} \oplus H^{\perp}.$$

Clearly, if H and K are relations with H ⊂ K, then K∗ ⊂ H∗. It also follows from (1.3.2) that H∗ is a closed linear relation from K to H. Note that (1.3.2) gives J⁻¹H∗ = H⊥, i.e.,

$$(J^{-1}H^\*)^\perp = H^{\perp \perp} = \overline{H}.$$

Since J⁻¹ is the flip-flop operator from K × H to H × K, the left-hand side coincides with H∗∗ and hence

$$H^{\ast \ast} = \overline{H},$$

so that the double adjoint of H gives the closure of H in H × K. As a byproduct, one obtains H∗ = (H̄)∗. It follows directly from the definition that

$$(H^\*)^{-1} = (H^{-1})^\*,\tag{1.3.3}$$

and sometimes the notation H⁻∗ := (H∗)⁻¹ = (H⁻¹)∗ will be used. These facts and some further elementary properties of adjoint relations are collected in the next proposition.

**Proposition 1.3.2.** Let H be a relation from H to K. Then the following statements hold:

(i) ker H∗ = (ran H)⊥ and mul H∗ = (dom H)⊥;

(ii) ker H̄ = (ran H∗)⊥ and mul H̄ = (dom H∗)⊥.
It is a direct consequence of Proposition 1.3.2 that

$$
\overline{\operatorname{dom}}\,H = \overline{\operatorname{dom}}\,H^{\*\*} \quad \text{and} \quad \overline{\operatorname{ran}}\,H = \overline{\operatorname{ran}}\,H^{\*\*}.
$$

The domain and range of the adjoint relation can be characterized as follows.

**Lemma 1.3.3.** Let H be a relation from H to K. Then dom H<sup>∗</sup> ⊂ K and ran H<sup>∗</sup> ⊂ H are characterized by

$$\operatorname{dom}H^\* = \left\{ f \in (\operatorname{mul}\overline{H})^\perp \, : \, |(f, h')| \le M\_f \|h\| \text{ for all } \{h, h'\} \in H \right\},$$

and

$$\operatorname{ran} H^\* = \left\{ f' \in (\ker \overline{H})^\perp \, : \, |(f', h)| \le M\_{f'} \|h'\| \text{ for all } \{h, h'\} \in H \right\},$$

where Mf and Mf′ are nonnegative constants depending on f and f′, respectively.

Proof. The first identity will be proved; the second identity follows from the first one by using H⁻¹ instead of H, and (1.3.3). So let f ∈ dom H∗. Then there exists an element f′ ∈ H with {f, f′} ∈ H∗. For {0, h′} ∈ H̄ there exist {hₙ, h′ₙ} ∈ H with {hₙ, h′ₙ} → {0, h′}. Hence, it follows from

$$(f', h\_n) = (f, h\_n')$$

that (f, h′) = 0. Thus, f ⊥ mul H̄. Furthermore, for all {h, h′} ∈ H it follows that |(f, h′)| = |(f′, h)| ≤ Mf‖h‖. Hence, dom H∗ is contained in the right-hand side.

To prove the converse inclusion, let f belong to the right-hand side. Since f ∈ (mul H̄)⊥, the linear relation from H to C given by

$$\Phi = \left\{ \{h, (h', f)\} \, : \, \{h, h'\} \in H \right\} \,, \quad \text{dom}\, \Phi = \text{dom}\, H,$$

is the graph of a linear functional, which is bounded because

$$|(h',f)| \le M\_f ||h|| \text{ for all } \{h,h'\} \in H.$$

Its closure Φ̄ is a bounded linear functional on $\overline{\operatorname{dom}}\,H$, and by the Riesz representation theorem there exists an element $f' \in \overline{\operatorname{dom}}\,H$ such that

$$
\overline{\Phi}h = (h, f'), \quad h \in \overline{\text{dom}}\, H.
$$

In particular, this shows that (h′, f) = (h, f′) for all {h, h′} ∈ H, which means that {f, f′} ∈ H∗. □

Proposition 1.3.2 and Lemma 1.3.3 immediately yield the following corollary.

**Corollary 1.3.4.** Let H be a relation from H to K. Then the following statements hold:

(i) H∗ is an operator if and only if dom H is dense in H;

(ii) H̄ is an operator if and only if dom H∗ is dense in K;

(iii) if H ∈ **B**(H, K), then H∗ ∈ **B**(K, H).
Proof. Items (i) and (ii) are immediate from Proposition 1.3.2. To prove (iii), assume H ∈ **B**(H, K). Since dom H = H, it follows that H∗ is a (closed) operator. Moreover, since mul H = {0} and H is bounded, Lemma 1.3.3 shows that dom H∗ = K. Now the closed graph theorem implies H∗ ∈ **B**(K, H). □

Occasionally the following situation comes up. Let M ⊂ H and N ⊂ K be (not necessarily closed) linear subspaces and let H = M × N. Then

$$H^\* = \left(J(\mathfrak{M} \times \mathfrak{N})\right)^\perp = (\mathfrak{N} \times \mathfrak{M})^\perp = \mathfrak{N}^\perp \times \mathfrak{M}^\perp. \tag{1.3.4}$$

Note that by the same argument H∗∗ = M⊥⊥ × N⊥⊥ = M̄ × N̄, which is of course clear from H∗∗ = H̄.

Let H and K be closed linear relations from H to K. Then the componentwise sum $H \mathbin{\widehat{+}} K$ is closed if and only if $H^{\perp} \mathbin{\widehat{+}} K^{\perp}$ is closed (see Theorem C.3). Since H∗ = JH⊥ and K∗ = JK⊥, this implies

$$H \mathbin{\widehat{+}} K \text{ closed} \quad \Leftrightarrow \quad H^\* \mathbin{\widehat{+}} K^\* \text{ closed.} \tag{1.3.5}$$

The next theorem is a variant of the closed range theorem in the general context of linear relations.

**Theorem 1.3.5.** Let H be a closed relation from H to K. Then the following statements hold:

(i) dom H is closed if and only if dom H∗ is closed;

(ii) ran H is closed if and only if ran H∗ is closed.
Proof. Since H and {0} × K are closed linear subspaces in H × K, it follows from the equivalence (1.3.5) that

$$H \mathbin{\widehat{+}} (\{0\} \times \mathfrak{K}) = \operatorname{dom} H \times \mathfrak{K}$$

is closed if and only if

$$H^\* \mathbin{\widehat{+}} (\{0\} \times \mathfrak{H}) = \operatorname{dom} H^\* \times \mathfrak{H}$$

is closed; cf. (1.3.4). This implies that dom H is closed if and only if dom H∗ is closed, that is, (i) holds. Assertion (ii) follows immediately by applying (i) to the inverse H⁻¹. □

An operator H from H to K is unitary if H is isometric, dom H = H, and ran H = K. The next result gives criteria, in terms of its adjoint, for a relation from H to K to be the graph of an isometric or unitary operator.

**Lemma 1.3.6.** Let H be a relation from H to K. Then the following statements hold:

(i) H⁻¹ ⊂ H∗ if and only if H is an isometric operator;

(ii) H⁻¹ = H∗ if and only if H is a unitary operator.
Proof. (i) Assume that H⁻¹ ⊂ H∗. For {h, h′} ∈ H one has {h′, h} ∈ H⁻¹ ⊂ H∗, which implies ‖h‖ = ‖h′‖ for {h, h′} ∈ H. This shows that H is an isometric operator. Conversely, let H be an isometric operator and {h′, h} ∈ H⁻¹. Then {h, h′} ∈ H and one has (h, k) = (h′, k′) for all {k, k′} ∈ H by polarization. This implies {h′, h} ∈ H∗ and hence H⁻¹ ⊂ H∗.

(ii) Assume that H⁻¹ = H∗. Then H is closed and by (i) H is an isometric operator. Therefore, dom H is closed by Lemma 1.1.5, and

$$(\operatorname{dom}H)^\perp = \operatorname{mul}H^\* = \operatorname{mul}H^{-1} = \ker H = \{0\}$$

implies dom H = H. Note that H⁻¹ satisfies (H⁻¹)⁻¹ = (H∗)⁻¹ = (H⁻¹)∗ and hence by the above argument dom H⁻¹ = K. This implies ran H = K and it follows that H is a unitary operator. Conversely, assume that the operator H is unitary. Then H ∈ **B**(H, K), H⁻¹ ∈ **B**(K, H), and H∗ ∈ **B**(K, H) by Corollary 1.3.4. Since H is isometric, one has H⁻¹ ⊂ H∗ by (i), and equality follows as both H⁻¹ and H∗ belong to **B**(K, H). □

Unitary operators are often used to identify different relations or Hilbert spaces.

**Definition 1.3.7.** Let H be a relation in H and let K be a relation in K. Then H and K are said to be unitarily equivalent if there exists a unitary operator U ∈ **B**(H, K) such that K = UHU<sup>∗</sup> or, equivalently,

$$K = \left\{ \{Uh, Uh'\} : \{h, h'\} \in H \right\}.\tag{1.3.6}$$

Assume that the relations H in H and K in K satisfy (1.3.6). Then one has {k, k'} ∈ K∗ if and only if (U∗k', h) = (U∗k, h') for all {h, h'} ∈ H, that is, {U∗k, U∗k'} ∈ H∗. By setting {k, k'} = {Uh, Uh'} it also follows from this that {Uh, Uh'} ∈ K∗ if and only if {h, h'} ∈ H∗. Hence, one has

$$H^* = \left\{ \{U^*k, U^*k'\} : \{k, k'\} \in K^* \right\}$$

and

$$K^* = \left\{ \{Uh, Uh'\} : \{h, h'\} \in H^* \right\}.\tag{1.3.7}$$

**Lemma 1.3.8.** Let H be a closed relation in H, let K be a closed relation in K, assume that ρ(H) ∩ ρ(K) ≠ ∅, and that U ∈ **B**(H, K) is unitary. Then H and K are unitarily equivalent if and only if

$$(K - \lambda)^{-1} = U(H - \lambda)^{-1}U^\* \tag{1.3.8}$$

for some, and hence for all λ ∈ ρ(H) ∩ ρ(K).

Proof. Assume that K = UHU∗. Then for all λ ∈ ρ(H) ∩ ρ(K) one has

$$K - \lambda = U(H - \lambda)U^\*.$$

Taking inverses yields (1.3.8). Conversely, assume that the identity (1.3.8) holds for some λ ∈ ρ(H) ∩ ρ(K). Then

$$H = \left\{ \left\{ (H - \lambda)^{-1} f, (I + \lambda (H - \lambda)^{-1}) f \right\} : f \in \mathfrak{H} \right\}$$

and

$$K = \left\{ \left\{ (K - \lambda)^{-1} g, (I + \lambda(K - \lambda)^{-1}) g \right\} : g \in \mathfrak{K} \right\}$$

by Lemma 1.2.4. Therefore,

$$\begin{aligned} K &= \left\{ \{ (K - \lambda)^{-1} U f, (I + \lambda (K - \lambda)^{-1}) U f \} : f \in \mathfrak{H} \right\} \\ &= \left\{ \{ U (H - \lambda)^{-1} f, U (I + \lambda (H - \lambda)^{-1}) f \} : f \in \mathfrak{H} \right\} \\ &= \left\{ \{ U h, U h' \} : \{ h, h' \} \in H \right\} \\ &= U H U^\*, \end{aligned}$$

which completes the argument. □
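For matrices the resolvent identity (1.3.8) of Lemma 1.3.8 is easy to test numerically. A minimal sketch, assuming NumPy; the matrices H, U and the point λ are arbitrary illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 3))                    # an everywhere defined operator on C^3
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a (real) unitary from a QR factorization
K = U @ H @ U.conj().T                             # K = U H U*: unitarily equivalent to H
lam = 5.0 + 3.0j                                   # a point in rho(H) ∩ rho(K)
I = np.eye(3)
# (K - lam)^{-1} = U (H - lam)^{-1} U*, which is identity (1.3.8)
assert np.allclose(np.linalg.inv(K - lam * I),
                   U @ np.linalg.inv(H - lam * I) @ U.conj().T)
```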

The next proposition concerns the adjoint of the sum and of the product of relations in Hilbert spaces.

**Proposition 1.3.9.** Let H and K be relations from H to K, and let L be a relation from K to G. Then the following statements hold:

(i) H∗ + K∗ ⊂ (H + K)∗, with equality if K ∈ **B**(H, K);

(ii) H∗L∗ ⊂ (LH)∗, with equality if L ∈ **B**(K, G).

Proof. (i) To show the inclusion H∗ + K∗ ⊂ (H + K)∗, assume that

$$\{f, f' + g'\} \in H^\* + K^\*, \quad \text{where} \quad \{f, f'\} \in H^\*, \ \{f, g'\} \in K^\*.$$

Next consider {h, h' + k'} ∈ H + K, where {h, h'} ∈ H and {h, k'} ∈ K. Then (f', h) = (f, h') and (g', h) = (f, k'), and hence

$$(f' + g', h) - (f, h' + k') = (f', h) - (f, h') + (g', h) - (f, k') = 0,$$

that is, {f, f' + g'} ∈ (H + K)∗. Now it will be shown that K ∈ **B**(H, K) implies the inclusion (H + K)∗ ⊂ H∗ + K∗. Let {f, f'} ∈ (H + K)∗. Then (f', h) = (f, h' + k') for all {h, h'} ∈ H and {h, k'} ∈ K. Since K ∈ **B**(H, K) and K∗ ∈ **B**(K, H), it follows that k' = Kh and

$$(f',h) = (f,h') + (f,Kh) = (f,h') + (K^\*f,h),$$

and hence (f' − K∗f, h) = (f, h') holds for all {h, h'} ∈ H. Therefore, one sees that {f, f' − K∗f} ∈ H∗ and {f, f'} ∈ H∗ + K∗.

(ii) First the inclusion H∗L∗ ⊂ (LH)∗ will be shown. Let {f, f'} ∈ H∗L∗, so that {f, g'} ∈ L∗ and {g', f'} ∈ H∗ for some g' ∈ K. Consider {h, l'} ∈ LH, where {h, h'} ∈ H and {h', l'} ∈ L for some h' ∈ K. Then (g', h') = (f, l') and (f', h) = (g', h'), and hence

$$(f', h) - (f, l') = (g', h') - (g', h') = 0$$

for any {h, l'} ∈ LH. This shows {f, f'} ∈ (LH)∗. Assume now that L ∈ **B**(K, G) and hence L∗ ∈ **B**(G, K). In order to show the inclusion (LH)∗ ⊂ H∗L∗, let {f, f'} ∈ (LH)∗. For {h, h'} ∈ H one has {h, Lh'} ∈ LH and hence

$$(f',h)=(f,Lh')=(L^*f,h').$$

This implies {L∗f, f'} ∈ H∗ and together with {f, L∗f} ∈ L∗ one concludes {f, f'} ∈ H∗L∗. □

Let H be a relation from H to K and let λ ∈ C. The following consequences of Proposition 1.3.9 will prove useful:

$$(\lambda H)^{\*} = \overline{\lambda} H^{\*},$$

and for H = K,

$$(H - \lambda)^\* = H^\* - \overline{\lambda}.$$

Hence, according to Proposition 1.3.2 (iii) one has

$$\ker (H^* - \overline{\lambda}) = \left(\operatorname{ran} (H - \lambda)\right)^\perp \quad\text{and}\quad \overline{\operatorname{ran}}\,(H - \lambda) = \left(\ker (H^* - \overline{\lambda})\right)^\perp. \tag{1.3.9}$$

Furthermore, by (1.3.3),

$$\left( (H - \lambda)^{-1} \right)^\* = (H^\* - \overline{\lambda})^{-1}.$$
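For an everywhere defined matrix operator A the last identity reads ((A − λ)^{-1})∗ = (A∗ − λ̄)^{-1}. A quick numerical sketch, assuming NumPy; the matrix and the point λ are arbitrary illustrative choices:

```python
import numpy as np

A = np.array([[1.0, 2.0j],
              [0.0, 3.0 ]])                 # eigenvalues 1 and 3
lam = 1.5 + 2.0j                            # a point in rho(A)
I = np.eye(2)
lhs = np.linalg.inv(A - lam * I).conj().T   # ((A - lam)^{-1})^*
rhs = np.linalg.inv(A.conj().T - np.conj(lam) * I)  # (A^* - conj(lam))^{-1}
assert np.allclose(lhs, rhs)
```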

In the next proposition the connection between the spectra of H and H∗ is discussed.

**Proposition 1.3.10.** Let H be a relation in H and let λ ∈ C. Then the following statements hold:

(i) λ ∈ ρ(H) ⇔ λ̄ ∈ ρ(H∗);

(ii) λ ∈ σ(H) ⇔ λ̄ ∈ σ(H∗).

If, in addition, the relation H is closed, then

(iii) λ ∈ σp(H) and ran (H − λ) is not dense in H ⇔ λ̄ ∈ σp(H∗) and ran (H∗ − λ̄) is not dense in H;

(iv) λ ∈ σp(H) and ran (H − λ) is dense in H ⇔ λ̄ ∈ σr(H∗);

(v) λ ∈ σc(H) ⇔ λ̄ ∈ σc(H∗).

Proof. (i) & (ii) If λ ∈ ρ(H), then (H − λ)^{-1} is a bounded operator with dense domain ran (H − λ), and hence it admits a continuous extension

$$\overline{(H-\lambda)^{-1}} = (\overline{H}-\lambda)^{-1} \in \mathbf{B}(\mathfrak{H}).\tag{1.3.10}$$

Thus, also (H∗ − λ̄)^{-1} = ((H − λ)^{-1})∗ ∈ **B**(H) and λ̄ ∈ ρ(H∗) follows. Conversely, for λ̄ ∈ ρ(H∗) one has (H∗ − λ̄)^{-1} ∈ **B**(H) since H∗ is closed. Hence, also (1.3.10) holds, and from this it is clear that (H − λ)^{-1} is a bounded operator with dense domain ran (H − λ). This gives (i), and (ii) follows immediately from (i).

(iii)–(v) are direct consequences of (1.3.9). □

In the next lemma it turns out that the scalar Möbius transform in Definition 1.1.10 behaves under adjoints as scalar multiplication does. In order to formulate the result, let the conjugate of a 2 × 2 matrix M be defined by

$$
\overline{\mathcal{M}} = \begin{pmatrix} \overline{\alpha} & \overline{\beta} \\ \overline{\gamma} & \overline{\delta} \end{pmatrix} \quad \text{when} \quad \mathcal{M} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}.
$$

The scalar Möbius transform corresponding to M̄ will again be denoted by M̄. The special case of the following lemma for the Cayley transform is particularly useful.

**Lemma 1.3.11.** Let H be a relation in H and let M be a 2×2 matrix as in (1.1.15), and assume that M is invertible. Then

$$(\mathcal{M}[H])^\* = \overline{\mathcal{M}}[H^\*].$$

In particular, for any μ ∈ C \ R,

$$(\mathfrak{C}_{\mu}[H])^{*} = \mathfrak{C}_{\overline{\mu}}[H^{*}].$$

Proof. First observe that (M[H])∗ ⊂ M̄[H∗]. To see this, let {f, f'} ∈ (M[H])∗. Then, by definition, one has for all {h, h'} ∈ H

$$0 = (f', \alpha h + \beta h') - (f, \gamma h + \delta h') = (-\overline{\gamma}f + \overline{\alpha}f', h) - (\overline{\delta} f - \overline{\beta}f', h'),$$

#### 1.3. Adjoint relations 37

which shows that

$$
\begin{pmatrix} \overline{\delta} & -\overline{\beta} \\ -\overline{\gamma} & \overline{\alpha} \end{pmatrix} \begin{pmatrix} f \\ f' \end{pmatrix} \in H^*.
$$

Multiplication by M̄ leads to

$$
\begin{pmatrix} f \\ f' \end{pmatrix} \in \begin{pmatrix} \overline{\alpha} & \overline{\beta} \\ \overline{\gamma} & \overline{\delta} \end{pmatrix} [H^\*] = \overline{\mathcal{M}}[H^\*],
$$

and so (M[H])∗ ⊂ M̄[H∗].

To see the reverse inclusion M̄[H∗] ⊂ (M[H])∗, let {f, f'} ∈ M̄[H∗], so that for some {ϕ, ϕ'} ∈ H∗

$$\{f, f'\} = \{\overline{\alpha}\varphi + \overline{\beta}\varphi', \overline{\gamma}\varphi + \overline{\delta}\varphi'\}.$$

Then for all {h, h'} ∈ H one has that

$$(f', \alpha h + \beta h') - (f, \gamma h + \delta h') = (\overline{\alpha}\overline{\delta} - \overline{\beta}\overline{\gamma}) \left[ (\varphi', h) - (\varphi, h') \right] = 0.$$

This implies that M̄[H∗] ⊂ (M[H])∗.

The statement about the Cayley transform follows with the special choice α = −μ, γ = −μ̄, and β = δ = 1; cf. (1.1.25). □
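When H is (the graph of) a matrix and αI + βH is invertible, M[H] is the operator (γI + δH)(αI + βH)^{-1}, and the adjoint formula of Lemma 1.3.11 can be verified directly. A sketch with an arbitrary invertible matrix M, assuming NumPy:

```python
import numpy as np

H = np.array([[0.0, 1.0 ],
              [2.0, 1.0j]])
a, b, c, d = 1.0, 2.0j, -1.0j, 3.0           # alpha, beta, gamma, delta; a*d - b*c = 1
I = np.eye(2)
MH = (c * I + d * H) @ np.linalg.inv(a * I + b * H)          # M[H] as an operator
Hs = H.conj().T
MbarHs = (np.conj(c) * I + np.conj(d) * Hs) @ np.linalg.inv(
    np.conj(a) * I + np.conj(b) * Hs)                        # conj(M)[H*]
assert np.allclose(MH.conj().T, MbarHs)                      # (M[H])* = conj(M)[H*]
```

The identity holds because ((αI + βH)^{-1})∗ = (ᾱI + β̄H∗)^{-1} and rational functions of the same matrix commute.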

The adjoint of the componentwise sum of linear relations is determined in the following proposition. Here the notation clos H is used for the closure of a relation H. Recall that if H and K are closed, then H +̂ K is closed if and only if H∗ +̂ K∗ is closed; cf. (1.3.5).

**Proposition 1.3.12.** Let H and K be relations from H to K. Then one has

$$\left(H \widehat{+} K\right)^{\*} = H^{\*} \cap K^{\*} \quad \text{and} \quad \text{clos}\left(H \widehat{+} K\right) = \left(H^{\*} \cap K^{\*}\right)^{\*}.\tag{1.3.11}$$

Proof. To verify the inclusion (H +̂ K)∗ ⊂ H∗ ∩ K∗, let {f, f'} ∈ (H +̂ K)∗. Then for every {h, h'} ∈ H and {k, k'} ∈ K one has

$$(f', h+k) = (f, h'+k').$$

In particular, (f', h) = (f, h') for all {h, h'} ∈ H and (f', k) = (f, k') for all {k, k'} ∈ K. It follows that {f, f'} ∈ H∗ ∩ K∗. Conversely, if {f, f'} ∈ H∗ ∩ K∗, then

$$(f',h)=(f,h')\qquad\text{and}\qquad(f',k)=(f,k')$$

hold for all {h, h'} ∈ H and {k, k'} ∈ K. Adding these two identities one obtains (f', h + k) = (f, h' + k') and hence {f, f'} ∈ (H +̂ K)∗. This shows the first identity in (1.3.11). The second identity in (1.3.11) follows from the first identity by taking adjoints. □

The adjoint of an orthogonal sum of relations behaves like the orthogonal complement of a sum of orthogonal subspaces.

**Proposition 1.3.13.** Let H be a relation from H1 to K1, let K be a relation from H2 to K2, and let H ⊕̂ K be their orthogonal sum. Then

$$\left(H \,\widehat{\oplus}\, K\right)^{*} = H^{*} \,\widehat{\oplus}\, K^{*},$$

where the adjoint in each case is taken in the corresponding Hilbert spaces.

Let H be a relation from H to K. Recall that the closure of H is given by H∗∗ and that H is a closable operator if and only if mul H∗∗ = {0}. The orthogonal decomposition

$$\mathfrak{K} = \overline{\operatorname{dom}}\, H^* \oplus \operatorname{mul} H^{**}$$

implies a related range decomposition of the relation H itself.

**Theorem 1.3.14.** Let H be a relation from H to K and let Q be the orthogonal projection in K onto the closure of dom H∗. Then H admits the sum decomposition

$$H = QH + (I - Q)H,\tag{1.3.12}$$

where the relations QH and (I − Q)H have the following properties:

(i) QH is a closable operator;

(ii) clos ((I − Q)H) = clos (dom H) × mul H∗∗.

Proof. As to the decomposition (1.3.12) it is clear that H ⊂ QH + (I − Q)H. For the converse, consider {h, Qh' + (I − Q)h''} for some {h, h'}, {h, h''} ∈ H. Observe that {0, h' − h''} ∈ H, i.e., h' − h'' ∈ mul H ⊂ mul H∗∗ = ker Q. Hence, Q(h' − h'') = 0 and this leads to

$$\left\{ h, Qh' + (I - Q)h'' \right\} = \left\{ h, Q(h' - h'') + h'' \right\} = \left\{ h, h'' \right\} \in H.$$

Hence, also QH + (I − Q)H ⊂ H. Thus, (1.3.12) holds.

(i) By Corollary 1.3.4, it suffices to show that dom (QH)∗ is dense in K. Observe that (QH)∗ = H∗Q by Proposition 1.3.9, and hence

$$\operatorname{dom}\left(QH\right)^{\*} = \operatorname{dom}H^{\*}Q = \operatorname{dom}H^{\*} \oplus \ker Q.\tag{1.3.13}$$

To see the last identity in (1.3.13) first observe that h ∈ dom H∗Q if and only if Qh ∈ dom H∗. Hence, if h ∈ dom H∗Q, then h = Qh + (I − Q)h shows that h ∈ dom H∗ ⊕ ker Q. Conversely, if h ∈ dom H∗ ⊕ ker Q, then h = f + g, where f ∈ dom H∗ and g ∈ ker Q. Hence, Qh = f ∈ dom H∗ and thus h ∈ dom H∗Q. This shows the last identity in (1.3.13). Now observe that ker Q = (dom H∗)⊥ and the identity (1.3.13) takes the form

$$\operatorname{dom}\left(QH\right)^{\*} = \operatorname{dom}H^{\*} \oplus (\operatorname{dom}H^{\*})^{\perp},$$

which implies that dom (QH)∗ is dense in K.


(ii) First it will be shown that

$$H^*(I - Q) = \overline{\operatorname{dom}}\, H^* \times \operatorname{mul} H^*. \tag{1.3.14}$$

For the inclusion (⊂), let {h, h'} ∈ H∗(I − Q). Then {(I − Q)h, h'} ∈ H∗ and since (I − Q)h ∈ (dom H∗)⊥, it follows that (I − Q)h = 0. Thus, h = Qh ∈ clos (dom H∗) and h' ∈ mul H∗. For the inclusion (⊃) in (1.3.14), let h ∈ clos (dom H∗) and h' ∈ mul H∗. Then (I − Q)h = 0 and hence {(I − Q)h, h'} = {0, h'} ∈ H∗. This implies that {h, h'} ∈ H∗(I − Q).

It follows from Proposition 1.3.9 that ((I −Q)H)<sup>∗</sup> = H∗(I −Q) and together with (1.3.14) one obtains

$$\begin{aligned} \operatorname{clos}\left((I-Q)H\right) &= \left((I-Q)H\right)^{**} \\ &= \left(H^*(I-Q)\right)^{*} \\ &= \left(\overline{\operatorname{dom}}\,H^* \times \operatorname{mul} H^*\right)^{*} \\ &= (\operatorname{mul} H^*)^{\perp} \times (\overline{\operatorname{dom}}\,H^*)^{\perp} \\ &= \overline{\operatorname{dom}}\,H \times \operatorname{mul} H^{**}; \end{aligned}$$

here (1.3.4) was used in the last but one step. This completes the proof of (ii). □

The sum decomposition in (1.3.12) is called the Lebesgue decomposition of the relation H into the regular part QH and the singular part (I − Q)H. The closure of the regular part QH is (the graph of) an operator, while the closure of the singular part (I − Q)H is a product of closed subspaces. This decomposition is the abstract variant of the Lebesgue decomposition of a measure.

The Lebesgue decomposition (1.3.12) for a relation H from H to K gives rise to a componentwise direct sum decomposition when mul H = mul H∗∗.

**Theorem 1.3.15.** Let H be a relation from H to K and let Q be the orthogonal projection in K onto the closure of dom H∗. Assume that

$$\operatorname{mul} H = \operatorname{mul} H^{**},\tag{1.3.15}$$

so that K can be decomposed as K = clos (dom H∗) ⊕ mul H. Then QH ⊂ H and the relation H has the direct sum decomposition

$$H = QH \,\widehat{+}\, \left(\{0\} \times \operatorname{mul} H\right), \tag{1.3.16}$$

where QH is a closable operator from H to K and {0} × mul H is a purely multivalued relation in mul H. Moreover, if the relation H is closed, then (1.3.15) is automatically satisfied and the operator QH is closed.

Proof. Note that any element {h, h'} ∈ H can be written as

$$\{h, h'\} = \{h, Qh'\} + \{0, (I - Q)h'\}.\tag{1.3.17}$$

Under the assumption (1.3.15) the orthogonal projection I − Q maps onto mul H and hence the relation H is contained in the right-hand side of (1.3.16). The identity (1.3.17) also implies QH ⊂ H and it follows that the right-hand side of (1.3.16) is contained in H. According to Theorem 1.3.14, QH is a closable operator and hence the sum in (1.3.16) is direct.

Now assume that the relation H is closed. In order to show that QH is closed, let {hn, h'n} ∈ H be a sequence such that {hn, Qh'n} → {ϕ, ψ}. Since {hn, Qh'n} ∈ QH ⊂ H and H is closed, it follows that {ϕ, ψ} ∈ H. Moreover, Qh'n → ψ implies ψ = Qψ and hence {ϕ, ψ} = {ϕ, Qψ} ∈ QH. □

According to the above theorem, the closable operator QH acts as an operator part of the relation H in the direct sum decomposition (1.3.16). Note that

$$
\operatorname{dom} QH = \operatorname{dom} H \quad \text{and} \quad \operatorname{ran} QH \subset \overline{\operatorname{dom}}\, H^*. \tag{1.3.18}
$$

The following theorem continues this line of thought in the special but useful situation where K = H, i.e., when H is a relation in H. Recall from Theorem 1.3.15 that if the relation H is closed then actually the operator QH is closed.

**Theorem 1.3.16.** Let H be a relation in H, let Q be the orthogonal projection onto the closure of dom H∗, and assume mul H = mul H∗∗. Suppose, in addition, that

$$\operatorname{dom} H \subset \overline{\operatorname{dom}}\, H^* \quad \text{or, equivalently,} \quad \operatorname{mul} H \subset \operatorname{mul} H^*. \tag{1.3.19}$$

Then the closable operator QH acts in the Hilbert space clos (dom H∗) and H has the orthogonal sum decomposition

$$H = QH \,\widehat{\oplus}\, \left(\{0\} \times \operatorname{mul} H\right). \tag{1.3.20}$$

Moreover, QH is densely defined in clos (dom H∗) if and only if mul H = mul H∗.

Proof. Since the condition (1.3.15) is assumed, Theorem 1.3.15 applies, and so the direct sum decomposition (1.3.16) holds, where QH is a closable operator in H and {0} × mul H is a purely multivalued relation in mul H.

Now the equivalence in (1.3.19) will be shown. If dom H ⊂ clos (dom H∗), it follows by taking orthogonal complements that mul H = mul H∗∗ ⊂ mul H∗. Conversely, if mul H = mul H∗∗ ⊂ mul H∗, it follows by taking orthogonal complements that clos (dom H) ⊂ clos (dom H∗) and, in particular, dom H ⊂ clos (dom H∗).

The conditions (1.3.19) and (1.3.18) imply that the closable operator QH acts in the Hilbert space clos (dom H∗), and hence the componentwise decomposition of H in (1.3.16) is actually a componentwise orthogonal sum, i.e., (1.3.20) holds. Furthermore, since dom QH = dom H by (1.3.18), it follows that the operator QH is densely defined in clos (dom H∗) if and only if clos (dom H) = clos (dom H∗), which is equivalent to mul H = mul H∗. □


The message of this theorem is that when mul H = mul H∗∗, the Hilbert space decomposes as H = clos (dom H∗) ⊕ mul H, and the regular part of the relation H serves as a not necessarily densely defined (orthogonal) operator part of H in the Hilbert space clos (dom H∗). In the rest of this text the following notation will be used:

$$\mathfrak{H}_{\rm op} = \overline{\operatorname{dom}}\, H^*, \qquad \mathfrak{H}_{\rm mul} = \operatorname{mul} H^{**} = \operatorname{mul} H,$$

and, similarly,

$$H_{\rm op} = QH, \qquad H_{\rm mul} = \{0\} \times \operatorname{mul} H.$$

With these notations one has

$$\mathfrak{H} = \mathfrak{H}_{\rm op} \oplus \mathfrak{H}_{\rm mul}, \qquad H = H_{\rm op} \,\widehat{\oplus}\, H_{\rm mul};$$

cf. Theorem 1.4.11, Theorem 1.5.1, and Theorem 1.6.12. The relation Hmul is purely multivalued and self-adjoint in the Hilbert space Hmul by (1.3.4), that is,

$$H\_{\text{mul}} = (H\_{\text{mul}})^\*.$$

From Proposition 1.3.13 one then obtains

$$H^* = (H_{\rm op})^* \,\widehat{\oplus}\, H_{\rm mul}$$

and hence the adjoint (Hop)∗ of Hop in Hop satisfies

$$(H_{\rm op})^* = H^* \cap \left(\mathfrak{H}_{\rm op} \times \mathfrak{H}_{\rm op}\right)$$

and its multivalued part in Hop is mul H∗ ∩ Hop. Note that

$$\operatorname{mul} H^* = \operatorname{mul} (H_{\rm op})^* \oplus \operatorname{mul} H_{\rm mul} = \left( \operatorname{mul} H^* \cap \mathfrak{H}_{\rm op} \right) \oplus \operatorname{mul} H.$$

This section ends by introducing the Moore–Penrose inverse of a relation; cf. Appendix D.

**Definition 1.3.17.** Let H be a relation from H to K. Then the Moore–Penrose inverse H(−1) of H from K to H is defined as the relation

$$H^{(-1)} = P\_{(\ker H)^\perp} H^{-1} = P\_{(\text{mul } H^{-1})^\perp} H^{-1}.$$

In fact, the Moore–Penrose inverse H^{(−1)} of H is an operator. To see this, let {0, k} ∈ H^{(−1)}. Then {0, h} ∈ H^{−1} and {h, k} ∈ P(ker H)⊥ for some h ∈ H, that is, k = P(ker H)⊥ h. Since {0, h} ∈ H^{−1} means that h ∈ ker H, it follows that k = 0. Furthermore, if H is closed, then Theorem 1.3.15 applied to H^{−1} (in which case Q = P(ker H)⊥) shows that

$$H^{-1} = P_{(\ker H)^\perp} H^{-1} \,\widehat{+}\, \left(\{0\} \times \ker H\right).$$

Hence, the Moore–Penrose inverse H^{(−1)} coincides with the operator part of H^{−1}. Moreover, if H is closed, then ran H is closed if and only if H^{(−1)} maps ran H boundedly into (ker H)⊥, in which case H^{(−1)} ∈ **B**(ran H, (ker H)⊥). Note that for H ∈ **B**(H, K) the Moore–Penrose inverse coincides with the usual Moore–Penrose inverse, see Appendix D.
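For a matrix H these statements reduce to familiar properties of the usual Moore–Penrose inverse: H^{(−1)}H is the orthogonal projection onto (ker H)⊥ and HH^{(−1)} is the orthogonal projection onto ran H. A numerical sketch, assuming NumPy; the matrix is an arbitrary rank-deficient example:

```python
import numpy as np

H = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])              # rank one: ker H and (ran H)^perp are nontrivial
Hmp = np.linalg.pinv(H)                      # the usual Moore–Penrose inverse
P = Hmp @ H                                  # orthogonal projection onto (ker H)^perp
Q = H @ Hmp                                  # orthogonal projection onto ran H
for R in (P, Q):
    assert np.allclose(R, R.conj().T) and np.allclose(R @ R, R)
assert np.allclose(H @ P, H)                 # H vanishes on ker H = ran (I - P)
```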

**Example 1.3.18.** Let T be a closed relation in H and assume that λ ∈ ρ(T). Then H = (T − λ)^{−1} ∈ **B**(H) with ker H = mul T and ran H = dom T, so that the Moore–Penrose inverse of H is the operator given by

$$H^{(-1)} = T\_{\rm op} - \lambda,$$

which maps dom T into (mul T)⊥.

## **1.4 Symmetric relations**

Symmetric relations are the building blocks of this text. Here the basic properties of such relations will be developed. The special case of self-adjoint relations will be treated in more detail in the next section.

**Definition 1.4.1.** A relation S in H is called symmetric if S ⊂ S∗, and self-adjoint if S = S∗. A symmetric relation S in H is said to be maximal symmetric if every symmetric extension S' of S in H satisfies S' = S.

It follows immediately from the definition of the adjoint relation that a relation S is symmetric if and only if

$$(f',g) = (f,g') \qquad \text{for all} \quad \{f,f'\}, \ \{g,g'\} \in S. \tag{1.4.1}$$

The following lemma provides a slightly stronger statement and an easily verifiable condition for the symmetry of a relation.

**Lemma 1.4.2.** A relation S in H is symmetric if and only if

$$\operatorname{Im}\left(f',f\right) = 0 \qquad \text{for all} \quad \{f,f'\} \in S. \tag{1.4.2}$$

Proof. If S ⊂ S∗, then (1.4.1) implies (1.4.2). Conversely, assume that (1.4.2) holds. Let {f, f'}, {g, g'} ∈ S and let λ ∈ C. Then {f + λg, f' + λg'} ∈ S and it follows from

$$\left(f' + \lambda g', f + \lambda g\right) = (f', f) + \overline{\lambda}(f', g) + \lambda(g', f) + |\lambda|^2(g', g)$$

and the assumption Im (f' + λg', f + λg) = 0 that

$$\operatorname{Im}\left(\overline{\lambda}(f',g) + \lambda(g',f)\right) = 0.$$

By putting λ = 1 and λ = i, respectively, one obtains

$$\operatorname{Im}\left(f',g\right) = -\operatorname{Im}\left(g',f\right), \quad \operatorname{Re}\left(f',g\right) = \operatorname{Re}\left(g',f\right),$$

which leads to the equality (f', g) = (f, g'). Hence, the relation S is symmetric by (1.4.1). □
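In finite dimensions Lemma 1.4.2 is the familiar fact that a matrix S is Hermitian if and only if (Sx, x) is real for every vector x. A numerical sketch, assuming NumPy; the matrices and the test vector are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S = B + B.conj().T                           # Hermitian, i.e., "symmetric"
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
# Im (Sx, x) = 0 for every x, i.e., (1.4.2) with {f, f'} = {x, Sx}
assert abs(np.vdot(x, S @ x).imag) < 1e-10
# a non-Hermitian perturbation destroys this: Im ((S + iI)x, x) = |x|^2 != 0
T = S + 1j * np.eye(3)
assert abs(np.vdot(x, T @ x).imag) > 1e-10
```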


If S is symmetric, then clearly also clos S ⊂ S∗, since S∗ is closed. Hence, the closure clos S is also symmetric. In particular, if S is maximal symmetric, then S is closed. Thus, every self-adjoint relation is maximal symmetric.

**Lemma 1.4.3.** Let S be a symmetric relation in H. Then mul S ⊂ mul S∗. If S is maximal symmetric, then mul S = mul S∗.

Proof. Let S be symmetric. Then it follows directly from Definition 1.4.1 that mul S ⊂ mul S∗.

Now assume S is maximal symmetric. It suffices to show mul S∗ ⊂ mul S. If k ∈ mul S∗ = (dom S)⊥, then

$$S \,\dot{+}\, \operatorname{span} \left\{ \{0, k\} \right\} = \left\{ \{h, h' + ck\} : \{h, h'\} \in S,\ c \in \mathbb{C} \right\}$$

is a symmetric extension of S, as k ∈ (dom S)⊥ implies Im (h' + ck, h) = Im (h', h) = 0 for all {h, h'} ∈ S and c ∈ C. Since S is maximal symmetric, it follows that {0, k} ∈ S and k ∈ mul S. Thus, mul S∗ ⊂ mul S. □

As an example consider the relation S defined by S = {0} × N, where N ⊂ H is a linear subspace. It follows from (1.3.4) that S∗ = N⊥ × H and S∗∗ = {0} × clos N. Hence, S is symmetric, while

$$\operatorname{mul} S = \mathfrak{N}, \qquad \operatorname{mul} \overline{S} = \overline{\mathfrak{N}}, \qquad \operatorname{mul} S^* = \mathfrak{H},$$

which shows that, even if S is closed, the inclusion mul S ⊂ mul S∗ in Lemma 1.4.3 is in general strict. Moreover, in the present example S is self-adjoint if and only if N = H. If S is maximal symmetric, then according to Lemma 1.4.3 one has N = H, so that S is self-adjoint.

In the rest of this text the interest will often be in extensions that are closed; in particular, in relations H that are self-adjoint extensions of a given symmetric relation S,

$$S \subset H = H^\* \subset S^\*.$$

Observe that H is a self-adjoint extension of S if and only if H is a self-adjoint extension of the closure clos S. In that sense it will often be assumed without loss of generality that S is closed; recall that γ(S) = γ(clos S).

**Proposition 1.4.4.** Let S be a symmetric relation in H. Then C \ R is contained in γ(S) and, in particular, the defect nλ(S) = dim (ran (S − λ))⊥ is constant for all λ ∈ C+ and for all λ ∈ C−. Furthermore, σp(S) ∪ σc(S) ⊂ R and

$$\left\|\left(S-\lambda\right)^{-1}h\right\| \leq \frac{1}{\left|\operatorname{Im}\lambda\right|}\,\|h\|\tag{1.4.3}$$

for all h ∈ dom (S − λ)^{−1} = ran (S − λ) and λ ∈ C \ R.

Proof. Let λ ∈ C \ R and {f, f'} ∈ S, so that {f' − λf, f} ∈ (S − λ)^{−1}. As S is symmetric, one has Im (f', f) = 0 by Lemma 1.4.2, and hence

$$0 \le |\operatorname{Im}\lambda|\,(f,f) = |\operatorname{Im}\left(f' - \lambda f, f\right)| \le \|f' - \lambda f\|\, \|f\|$$

and for f ≠ 0 this implies

$$0 \le |\operatorname{Im}\lambda|\, \|f\| \le \|f' - \lambda f\|.$$

Therefore, (S − λ)^{−1} is an operator and (1.4.3) holds for all h ∈ dom (S − λ)^{−1} and λ ∈ C \ R. This also shows C \ R ⊂ γ(S) and it follows from Theorem 1.2.5 that the defect nλ(S) is constant on C+ and on C−. It is clear that the point spectrum σp(S) is contained in R, and since (S − λ)^{−1} is bounded for λ ∈ C \ R, also the continuous spectrum σc(S) is contained in R. □
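For a Hermitian matrix S the estimate (1.4.3) becomes ‖(S − λ)^{-1}‖ ≤ 1/|Im λ|, since the spectrum of S is real and the resolvent norm is the reciprocal distance to the spectrum. A numerical sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
S = (B + B.T) / 2                            # real symmetric: sigma(S) is real
lam = 0.5 + 0.25j                            # a nonreal point
res = np.linalg.inv(S - lam * np.eye(4))
# operator norm (largest singular value) obeys the resolvent bound (1.4.3)
assert np.linalg.norm(res, 2) <= 1.0 / abs(lam.imag) + 1e-12
```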

The defect numbers n±(S) of a symmetric relation S are defined as

$$n\_{\pm}(S) := \dim \left( \text{ran} \left( S \mp i \right) \right)^{\perp} = \dim \left( \ker \left( S^\* \pm i \right) \right), \tag{1.4.4}$$

where according to Proposition 1.4.4 the point ±i in (1.4.4) can be replaced by any λ ∈ C±.

In the case where the symmetric relation S is bounded from below in the sense of the next definition, it follows that γ(S) ∩ R ≠ ∅ and thus γ(S) consists of one component only; cf. Proposition 1.4.6. In particular, the defect numbers n+(S) and n−(S) coincide in this case.

**Definition 1.4.5.** Let S be a relation in H. Then S is said to be bounded from below if there exists a number η ∈ R such that

$$(f',f) \ge \eta\,(f,f) \quad \text{for all} \quad \{f,f'\} \in S. \tag{1.4.5}$$

The lower bound m(S) of S is the largest number η ∈ R for which (1.4.5) holds:

$$m(S) = \inf \left\{ \frac{(f',f)}{(f,f)} : \ \{f,f'\} \in S, \ f \neq 0 \right\}.$$

The inequality (1.4.5) will be written as S ≥ ηI, which will be further abbreviated to S ≥ η. If S ≥ 0, then S is called nonnegative (and S may have a positive lower bound).

For a relation S that is bounded from below also the terminology semibounded relation will be used. If S is bounded from below, then it follows directly from Lemma 1.4.2 that S is symmetric. Moreover, if (1.4.5) is satisfied for all η ∈ R, then the inequality η‖f‖² ≤ ‖f'‖ ‖f‖ for {f, f'} ∈ S and all η ∈ R shows that dom S = {0}, and hence S is a purely multivalued relation.

**Proposition 1.4.6.** Let S be a symmetric relation in H which is bounded from below with lower bound m(S) ∈ R. Then C \ [m(S), ∞) is contained in γ(S) and the defect nλ(S) = dim (ran (S − λ))⊥ is constant for all λ ∈ C \ [m(S), ∞). Furthermore, σp(S) ∪ σc(S) ⊂ [m(S), ∞) and

$$\left\|(S-\nu)^{-1}h\right\|\leq\frac{1}{m(S)-\nu}\,\|h\|\tag{1.4.6}$$

for all h ∈ dom (S − ν)^{−1} and ν < m(S).

Proof. For {f, f'} ∈ S and ν < m(S) the assumption (f' − m(S)f, f) ≥ 0 implies

$$\begin{aligned} (m(S) - \nu)(f, f) &\le \left( f' - m(S)f + (m(S) - \nu)f, f \right) = (f' - \nu f, f) \\ &\le \| f' - \nu f \|\, \| f \|. \end{aligned}$$

Hence, if f ≠ 0 it follows that (m(S) − ν)‖f‖ ≤ ‖f' − νf‖ holds for all {f, f'} ∈ S and ν < m(S). This shows that (S − ν)^{−1} is an operator and (1.4.6) holds. Recalling Proposition 1.4.4, one has C \ [m(S), ∞) ⊂ γ(S), and Theorem 1.2.5 implies that the defect nλ(S) is constant on C \ [m(S), ∞), and σp(S) ∪ σc(S) ⊂ [m(S), ∞) holds. □
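Similarly, for a nonnegative Hermitian matrix S with smallest eigenvalue m(S) the estimate (1.4.6) is in fact an equality, ‖(S − ν)^{-1}‖ = 1/(m(S) − ν) for ν < m(S). A numerical sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
S = B.T @ B                                  # nonnegative, so m(S) >= 0
m = np.linalg.eigvalsh(S).min()              # m(S): the smallest eigenvalue
nu = m - 2.0                                 # any nu < m(S)
res = np.linalg.inv(S - nu * np.eye(3))
# the resolvent norm equals 1/(m(S) - nu) for Hermitian S
assert abs(np.linalg.norm(res, 2) - 1.0 / (m - nu)) < 1e-10
```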

**Lemma 1.4.7.** Let S be a closed symmetric relation in H and let λ ∈ γ(S). Then ran (S∗ − λ) = H.

Proof. Let λ ∈ γ(S); then also λ̄ ∈ γ(S) (for λ ∈ C \ R this follows from Proposition 1.4.4). This implies that ran (S − λ̄) is closed according to Lemma 1.2.2. Hence, ran (S∗ − λ) is closed by Theorem 1.3.5. Moreover, λ̄ ∈ γ(S) implies ker (S − λ̄) = {0} and therefore

$$\left(\operatorname{ran}\left(S^*-\lambda\right)\right)^\perp = \ker\left(S-\overline{\lambda}\right) = \{0\},$$

that is, ran (S∗ − λ) is dense in H. It follows that ran (S∗ − λ) = H. □

In the next proposition the Cayley transform and the inverse Cayley transform of symmetric relations are considered.

**Proposition 1.4.8.** Let μ ∈ C \ R and let Cμ and Fμ be the Cayley transform and inverse Cayley transform in Definition 1.1.13. Let S and V be relations in H such that V = Cμ[S] or, equivalently, S = Fμ[V ]. Then the following statements hold:

(i) S is a (closed) symmetric relation if and only if V is a (closed) isometric operator;

(ii) S is maximal symmetric if and only if dom V = H or ran V = H.

Proof. (i) Let S and V be relations such that V = Cμ[S] with μ ∈ C \ R. Then (1.1.23), (1.1.25), and Lemma 1.3.11 show that

$$V^{-1} = \mathfrak{C}_{\overline{\mu}}[S] \quad \text{and} \quad V^* = \mathfrak{C}_{\overline{\mu}}[S^*].$$

These equalities and an application of the inverse Cayley transform give

$$V^{-1} \subset V^* \quad \Leftrightarrow \quad \mathfrak{C}_{\overline{\mu}}[S] \subset \mathfrak{C}_{\overline{\mu}}[S^*] \quad \Leftrightarrow \quad S \subset S^*.$$

Now Lemma 1.3.6 shows that S is a symmetric relation if and only if V is an isometric operator. Moreover, by (1.1.18), S is closed if and only if V^{−1} = Cμ̄[S] is closed, which completes the proof of (i).

(ii) Let S be maximal symmetric and let V = Cμ[S]. Assume that V' is an isometric extension of V in H. Then S' = Fμ[V'] is a symmetric extension of S and hence S' = S. This implies V' = V, and hence dom V = H or ran V = H. The converse statement is proved by the same argument. □

It follows from Proposition 1.4.8 (ii) and (1.1.24) that a symmetric relation S in H is maximal symmetric if and only if

$$\text{ran}\,(S-\mu) = \mathfrak{H} \tag{1.4.7}$$

for some, and hence for all, μ ∈ C+ or for some, and hence for all, μ ∈ C−.
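For a Hermitian matrix S, which is self-adjoint and hence maximal symmetric, the criterion (1.4.7) holds for every nonreal μ, and the Cayley transform is then a unitary defined on the whole space. A numerical sketch, assuming NumPy and writing the Cayley transform of Definition 1.1.13 as V = (S − μ̄)(S − μ)^{-1}:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3))
S = (B + B.T) / 2                            # self-adjoint, so maximal symmetric
mu = 1j                                      # mu in C+
I = np.eye(3)
# S - mu is invertible since sigma(S) is real, i.e., ran (S - mu) is everything
V = (S - np.conj(mu) * I) @ np.linalg.inv(S - mu * I)   # Cayley transform C_mu[S]
assert np.allclose(V.conj().T @ V, I) and np.allclose(V @ V.conj().T, I)
```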

**Definition 1.4.9.** Let S be a symmetric relation in H and let λ ∈ C. The spaces

$$\mathfrak{N}_{\lambda}(S^*) := \ker \left( S^* - \lambda \right) \quad \text{and} \quad \widehat{\mathfrak{N}}_{\lambda}(S^*) := \left\{ \{f_{\lambda}, \lambda f_{\lambda}\} : f_{\lambda} \in \mathfrak{N}_{\lambda}(S^*) \right\}$$

are called defect subspaces of S at the point λ ∈ C.

Note that the defect numbers n±(S) in (1.4.4) satisfy

$$n\_{\pm}(S) = \dim \mathfrak{N}\_{\mp i}(S^\*) = \dim \mathfrak{N}\_{\lambda}(S^\*), \qquad \lambda \in \mathbb{C}^{\mp}.$$

Since the adjoint relation $S^\*$ is closed by Proposition 1.3.2, the defect subspaces $\mathfrak{N}\_\lambda(S^\*) \subset \operatorname{dom} S^\*$ and $\widehat{\mathfrak{N}}\_\lambda(S^\*) \subset S^\*$ are closed subspaces of $\mathfrak{H}$ and $\mathfrak{H}^2$, respectively. The notation in Definition 1.4.9 will be used throughout the text. Besides the defect subspaces $\mathfrak{N}\_\lambda(S^\*)$ and $\widehat{\mathfrak{N}}\_\lambda(S^\*)$, also the spaces

$$\mathfrak{N}\_{\lambda}(S) := \ker\left(S - \lambda\right) \quad \text{and} \quad \widehat{\mathfrak{N}}\_{\lambda}(S) := \left\{ \{f\_{\lambda}, \lambda f\_{\lambda}\} : f\_{\lambda} \in \mathfrak{N}\_{\lambda}(S) \right\}$$

for a symmetric relation S will be used. Moreover, let

$$\mathfrak{N}\_{\infty}(S) := \operatorname{mul} S \quad \text{and} \quad \widehat{\mathfrak{N}}\_{\infty}(S) := \left\{ \{0, f'\} : f' \in \mathfrak{N}\_{\infty}(S) \right\}.$$

**Lemma 1.4.10.** Let $S$ be a closed symmetric relation in $\mathfrak{H}$ and let $H \subset S^\*$ be a closed extension of $S$ such that $\rho(H) \neq \emptyset$. Then for $\lambda, \mu \in \rho(H)$

$$I + (\lambda - \mu)(H - \lambda)^{-1} \tag{1.4.8}$$

is boundedly invertible with inverse $I + (\mu - \lambda)(H - \mu)^{-1}$. For fixed $\mu \in \rho(H)$ the mapping (1.4.8) is holomorphic in $\lambda \in \rho(H)$. Moreover, for $\lambda, \mu \in \rho(H)$ the operator in (1.4.8) maps $\mathfrak{N}\_\mu(S^\*)$ bijectively onto $\mathfrak{N}\_\lambda(S^\*)$.


Proof. Let $\lambda, \mu \in \rho(H)$. Then the first assertion follows from Corollary 1.2.8. The holomorphy of $\lambda \mapsto I + (\lambda - \mu)(H - \lambda)^{-1}$ follows from the holomorphy of the resolvent; cf. Corollary 1.2.7. As to the defect spaces, it is first verified that for $f\_\mu \in \mathfrak{N}\_\mu(S^\*)$ one has

$$f\_{\lambda} := \left( I + (\lambda - \mu)(H - \lambda)^{-1} \right) f\_{\mu} \in \mathfrak{N}\_{\lambda}(S^\*) = \left(\operatorname{ran}\left(S - \overline{\lambda}\right)\right)^{\perp}. \tag{1.4.9}$$

To see this, let $\{g, g'\} \in S$ and consider

$$\begin{aligned} (f\_\lambda, g' - \overline{\lambda}g) &= \left( (I + (\lambda - \mu)(H - \lambda)^{-1})f\_\mu, g' - \overline{\lambda}g \right) \\ &= \left( f\_\mu, \left( I + (\overline{\lambda} - \overline{\mu})(H^\* - \overline{\lambda})^{-1} \right)(g' - \overline{\lambda}g) \right). \end{aligned}$$

Since $\overline{\lambda} \in \rho(H^\*)$ according to Proposition 1.3.10 and $S \subset H^\*$, it follows that $(H^\* - \overline{\lambda})^{-1}(g' - \overline{\lambda}g) = g$, and so

$$(f\_\lambda, g' - \overline{\lambda}g) = (f\_\mu, g' - \overline{\lambda}g + (\overline{\lambda} - \overline{\mu})g) = (f\_\mu, g' - \overline{\mu}g) = 0.$$

Hence, (1.4.9) is clear. It follows that the operator in (1.4.8) maps $\mathfrak{N}\_\mu(S^\*)$ into $\mathfrak{N}\_\lambda(S^\*)$. The same reasoning with $\lambda$ and $\mu$ interchanged shows that the map is in fact onto. $\square$
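The first assertion of Lemma 1.4.10 is essentially the resolvent identity and can be checked numerically for matrices. A minimal sketch, assuming numpy (a Hermitian matrix plays the role of $H$; in this finite-dimensional setting the defect spaces are trivial, so only the invertibility claim is illustrated):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
X = rng.standard_normal((n, n))
H = (X + X.T) / 2                     # real symmetric matrix: self-adjoint
I = np.eye(n)
lam, mu = 1j, 2.0 + 0.5j              # both in rho(H), since sigma(H) is real

R = lambda z: np.linalg.inv(H - z * I)

# I + (lam - mu) R(lam) and I + (mu - lam) R(mu) are mutual inverses,
# as claimed for the operator (1.4.8); this follows from the resolvent
# identity R(lam) - R(mu) = (lam - mu) R(lam) R(mu).
A = I + (lam - mu) * R(lam)
B = I + (mu - lam) * R(mu)
print(np.allclose(A @ B, I), np.allclose(B @ A, I))   # True True
```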

Closed symmetric relations can be written as orthogonal sums of closed symmetric operators and self-adjoint purely multivalued linear relations. This is a straightforward consequence of Theorem 1.3.16.

**Theorem 1.4.11.** Let $S$ be a closed symmetric relation in $\mathfrak{H}$. Decompose $\mathfrak{H}$ as $\mathfrak{H} = \mathfrak{H}\_{\rm op} \oplus \mathfrak{H}\_{\rm mul}$, where $\mathfrak{H}\_{\rm op} := (\operatorname{mul} S)^\perp$ and $\mathfrak{H}\_{\rm mul} := \operatorname{mul} S$, and denote the orthogonal projection from $\mathfrak{H}$ onto $\mathfrak{H}\_{\rm op}$ by $P\_{\rm op}$. Then $S$ is the direct orthogonal sum $S\_{\rm op} \mathbin{\widehat{\oplus}} S\_{\rm mul}$ of the closed symmetric operator

$$S\_{\mathrm{op}} = \left\{ \{ f, P\_{\mathrm{op}} f' \} : \{ f, f' \} \in S \right\}$$

in Hop and the self-adjoint purely multivalued relation

$$S\_{\rm mul} = \left\{ \{0, f'\} : f' \in \mathfrak{H}\_{\rm mul} \right\} = \left\{ \{0, (I - P\_{\rm op}) f'\} : \{f, f'\} \in S \right\}$$

in $\mathfrak{H}\_{\rm mul}$. Moreover, the operator $S\_{\rm op}$ is densely defined in $\mathfrak{H}\_{\rm op}$ if and only if $\operatorname{mul} S = \operatorname{mul} S^\*$. If the relation $S$ is maximal symmetric, then $S\_{\rm op}$ is a densely defined maximal symmetric operator in $\mathfrak{H}\_{\rm op}$.

Proof. By assumption the relation $S$ is closed and symmetric, which implies that $\operatorname{mul} S = \operatorname{mul} S^{\*\*}$ and $\operatorname{dom} S \subset \operatorname{dom} S^\*$. Thus, Theorem 1.3.16 applies; it yields the indicated decomposition, and the criterion for the denseness of $S\_{\rm op}$ follows.

If $S$ is maximal symmetric, then $\operatorname{mul} S = \operatorname{mul} S^\*$ by Lemma 1.4.3. Hence, in this case the operator $S\_{\rm op}$ is densely defined. Assume that $S\_1$ is a symmetric extension of $S\_{\rm op}$ in $\mathfrak{H}\_{\rm op}$. Then $S\_1 \mathbin{\widehat{\oplus}} S\_{\rm mul}$ is a symmetric extension of $S$ and hence coincides with $S$. This implies $S\_{\rm op} = S\_1$, and therefore $S\_{\rm op}$ is a maximal symmetric operator in $\mathfrak{H}\_{\rm op}$. $\square$

## **1.5 Self-adjoint relations**

For self-adjoint relations there is always an orthogonal decomposition into a self-adjoint operator and a self-adjoint purely multivalued part. This reduction allows one to apply the spectral theory for self-adjoint operators in the present context; see also Chapter 3. This section contains a brief introduction and a number of consequences of this approach; in particular, nonnegative and semibounded self-adjoint relations will be considered.

The following reduction result is a specialization of Theorem 1.4.11 for self-adjoint relations.

**Theorem 1.5.1.** Let $H$ be a self-adjoint relation in $\mathfrak{H}$. Decompose the space $\mathfrak{H}$ as $\mathfrak{H} = \mathfrak{H}\_{\rm op} \oplus \mathfrak{H}\_{\rm mul}$, where

$$\mathfrak{H}\_{\mathrm{op}} := \overline{\operatorname{dom}}\, H = (\operatorname{mul} H)^\perp \quad \text{and} \quad \mathfrak{H}\_{\mathrm{mul}} := \operatorname{mul} H,$$

and denote the orthogonal projection from $\mathfrak{H}$ onto $\mathfrak{H}\_{\rm op}$ by $P\_{\rm op}$. Then $H$ is the direct orthogonal sum $H\_{\rm op} \mathbin{\widehat{\oplus}} H\_{\rm mul}$ of the (densely defined) self-adjoint operator

$$H\_{\mathrm{op}} = \left\{ \{ f, P\_{\mathrm{op}} f' \} : \{ f, f' \} \in H \right\}$$

in Hop and the self-adjoint purely multivalued relation

$$H\_{\text{mul}} = \left\{ \{0, f'\} : f' \in \mathfrak{H}\_{\text{mul}} \right\} = \left\{ \{0, (I - P\_{\text{op}})f'\} : \{f, f'\} \in H \right\}$$

in Hmul .

Observe that for $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the resolvent of $H$ in Theorem 1.5.1 admits the matrix representation

$$(H - \lambda)^{-1} = \begin{pmatrix} P\_{\rm op} \left( H - \lambda \right)^{-1} \iota\_{\rm op} & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} (H\_{\rm op} - \lambda)^{-1} & 0 \\ 0 & 0 \end{pmatrix} \tag{1.5.1}$$

with respect to the space decomposition H = Hop ⊕ Hmul . Here ιop denotes the canonical embedding of Hop in H.

In the following it is explained how the spectral theory and the functional calculus for self-adjoint operators extend via Theorem 1.5.1 to self-adjoint relations. First of all it is clear that the finite (real) spectrum of a self-adjoint relation $H = H\_{\rm op} \mathbin{\widehat{\oplus}} H\_{\rm mul}$ is the same as that of the self-adjoint operator part $H\_{\rm op}$. Note that $\rho(H\_{\rm mul}) = \mathbb{C}$ and that $\sigma(H\_{\rm mul}^{-1})$ consists only of the eigenvalue $0$. The essential spectrum $\sigma\_{\rm ess}(H)$ and the discrete spectrum $\sigma\_{\rm d}(H)$ of a self-adjoint relation $H$ are defined as the essential spectrum and discrete spectrum of its operator part $H\_{\rm op}$, respectively. Recall that the discrete spectrum consists of all isolated eigenvalues of finite multiplicity, while the essential spectrum is the remaining part of the spectrum; it consists of the continuous spectrum, eigenvalues embedded in the continuous spectrum, and isolated eigenvalues of infinite multiplicity. It is useful to observe that $\lambda \in \sigma\_{\rm d}(H)$ if and only if $\lambda \in \sigma(H)$, $\dim \ker(H - \lambda) < \infty$, and $\operatorname{ran}(H - \lambda)$ is closed.

#### 1.5. Self-adjoint relations 49

The spectral measure Eop (·) of the self-adjoint operator Hop is defined for the Borel sets in R with orthogonal projections in Hop as values, and the corresponding spectral function is defined as t → Eop((−∞, t)). Then one has

$$\begin{aligned} H\_{\text{op}}f &= \int\_{\mathbb{R}} t \, dE\_{\text{op}}\,(t)f, \\ \text{dom}\,H\_{\text{op}} &= \left\{ f \in \mathfrak{H}\_{\text{op}} \,:\, \int\_{\mathbb{R}} t^2 \, d(E\_{\text{op}}\,(t)f, f) < \infty \right\}. \end{aligned}$$

Furthermore, for a bounded measurable function $h : \mathbb{R} \to \mathbb{C}$ the bounded operator $h(H\_{\rm op}) \in \mathbf{B}(\mathfrak{H}\_{\rm op})$ is defined via the functional calculus for self-adjoint operators in $\mathfrak{H}\_{\rm op}$:

$$h(H\_{\rm op}) = \int\_{\mathbb{R}} h(t) \, dE\_{\rm op} \,(t). \tag{1.5.2}$$

In particular, the spectral calculus leads to the formula

$$\left(H\_{\rm op} - \lambda\right)^{-1} = \int\_{\mathbb{R}} \frac{1}{t - \lambda} dE\_{\rm op}\left(t\right), \quad \lambda \in \rho(H\_{\rm op}).\tag{1.5.3}$$
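For a Hermitian matrix the spectral measure consists of point masses at the eigenvalues, so the integrals above become finite sums. The following sketch of (1.5.2) and (1.5.3) in this discrete setting is an illustration only, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n))
Hop = (X + X.T) / 2                   # self-adjoint matrix
t, U = np.linalg.eigh(Hop)            # eigenvalues and an orthonormal basis

def calc(h):
    # h(Hop) = sum_j h(t_j) (., u_j) u_j : the discrete form of (1.5.2)
    return (U * h(t)) @ U.T

lam = 1.5j
# (1.5.3): the resolvent is the calculus applied to t -> 1/(t - lam)
lhs = calc(lambda s: 1.0 / (s - lam))
rhs = np.linalg.inv(Hop - lam * np.eye(n))
print(np.allclose(lhs, rhs))          # True
```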

The spectral projection Eop ((a, b)) can be obtained with the help of the resolvent of Hop and Stone's formula

$$\lim\_{\varepsilon \to +0} \lim\_{\delta \to +0} \frac{1}{2\pi i} \int\_{a+\varepsilon}^{b-\varepsilon} \left( \left( H\_{\text{op}} - (t+i\delta) \right)^{-1} - \left( H\_{\text{op}} - (t-i\delta) \right)^{-1} \right) dt,\tag{1.5.4}$$

where the limits exist in the strong sense.
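Stone's formula can be tested numerically for a diagonal matrix: with a fixed small $\delta$ and a fine grid, the integral approximates the spectral projection onto the eigenvalues inside $(a, b)$. A rough sketch, assuming numpy (the limits in (1.5.4) are replaced by fixed small parameters, so the result is only approximate):

```python
import numpy as np

Hop = np.diag([0.0, 1.0, 3.0])        # sigma(Hop) = {0, 1, 3}
I = np.eye(3)
a, b = 0.5, 2.0                       # (a, b) contains only the eigenvalue 1
delta, eps = 1e-3, 1e-6

# Stacked resolvents on a grid t in [a + eps, b - eps]
ts = np.linspace(a + eps, b - eps, 40001)
Rp = np.linalg.inv(Hop - (ts[:, None, None] + 1j * delta) * I)
Rm = np.linalg.inv(Hop - (ts[:, None, None] - 1j * delta) * I)

# Riemann-sum version of Stone's formula (1.5.4)
dt = ts[1] - ts[0]
E_ab = (Rp - Rm).sum(axis=0) * dt / (2j * np.pi)

# E_ab approximates the spectral projection diag(0, 1, 0) onto ker(Hop - 1)
print(np.allclose(E_ab.real, np.diag([0.0, 1.0, 0.0]), atol=0.01))  # True
```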

The spectral measure of a self-adjoint relation will be defined on R as the orthogonal sum of the spectral measure of Hop and the zero operator in Hmul .

**Definition 1.5.2.** Let $H = H\_{\rm op} \mathbin{\widehat{\oplus}} H\_{\rm mul}$ be a self-adjoint relation in the Hilbert space $\mathfrak{H} = \mathfrak{H}\_{\rm op} \oplus \mathfrak{H}\_{\rm mul}$ and denote the spectral measure of the self-adjoint operator $H\_{\rm op}$ in $\mathfrak{H}\_{\rm op}$ by $E\_{\rm op}(\cdot)$. Then the spectral measure $E(\cdot)$ of $H$ is defined as

$$E(\cdot) = \begin{pmatrix} E\_{\text{op}}(\cdot) & 0\\ 0 & 0 \end{pmatrix}$$

with respect to the decomposition H = Hop ⊕ Hmul .

Now the functional calculus for the self-adjoint operator $H\_{\rm op}$ yields a functional calculus for the self-adjoint relation $H = H\_{\rm op} \mathbin{\widehat{\oplus}} H\_{\rm mul}$. More precisely, for a bounded measurable function $h : \mathbb{R} \to \mathbb{C}$ one defines

$$h(H) = \int\_{\mathbb{R}} h(t) \, dE(t)$$

in accordance with (1.5.2). It follows directly from Definition 1.5.2 and (1.5.2) that

$$h(H) = h(H\_{\rm op}) \oplus 0\_{\operatorname{mul} H} \tag{1.5.5}$$

is an everywhere defined bounded operator in H. In particular, for the resolvent of H in (1.5.1) one has

$$(H - \lambda)^{-1} = \int\_{\mathbb{R}} \frac{1}{t - \lambda} dE(t), \quad \lambda \in \rho(H);\tag{1.5.6}$$

cf. (1.5.3). Note that the spectral projection E((a, b)) of H is also given by Stone's formula

$$\lim\_{\varepsilon \to +0} \lim\_{\delta \to +0} \frac{1}{2\pi i} \int\_{a+\varepsilon}^{b-\varepsilon} \left( \left( H - (t + i\delta) \right)^{-1} - \left( H - (t - i\delta) \right)^{-1} \right) dt,\tag{1.5.7}$$

which again follows from the decomposition of the resolvent of H in (1.5.1); as in (1.5.4), the limits are understood in the strong sense. A proof of Stone's formula (in the weak sense) can also be found in Example A.1.4.

The next lemma on strong convergence follows from the properties of the functional calculus for self-adjoint operators; cf. [649, Theorem VIII.5] and (1.5.5). It will be used in Chapter 3.

**Lemma 1.5.3.** Let $h\_n : \mathbb{R} \to \mathbb{C}$ be a sequence of bounded measurable functions which converges pointwise to a function $h$ and such that the sequence $\|h\_n\|\_\infty$ is bounded. Then

$$\lim\_{n \to \infty} h\_n(H\_{\text{op}})f = h(H\_{\text{op}})f, \quad f \in \mathfrak{H}\_{\text{op}},$$

and

$$\lim\_{n \to \infty} h\_n(H)g = h(H)g, \qquad g \in \mathfrak{H}.$$

The next proposition on the Cayley transform of self-adjoint relations complements Proposition 1.4.8.

**Proposition 1.5.4.** Let $\mu \in \mathbb{C} \setminus \mathbb{R}$ and let $\mathcal{C}\_\mu$ and $\mathcal{F}\_\mu$ be the Cayley transform and inverse Cayley transform defined in (1.1.23). Let $S$ and $V$ be relations such that $V = \mathcal{C}\_\mu[S]$ or, equivalently, $S = \mathcal{F}\_\mu[V]$. Then $S$ is a self-adjoint relation if and only if $V$ is a unitary operator.

Proof. As in the proof of Proposition 1.4.8 one obtains for $V = \mathcal{C}\_\mu[S]$ and $\mu \in \mathbb{C} \setminus \mathbb{R}$ that

$$V^{-1} = V^\* \quad \Leftrightarrow \quad \mathcal{C}\_{\bar{\mu}}[S] = \mathcal{C}\_{\bar{\mu}}[S^\*] \quad \Leftrightarrow \quad S = S^\*,$$

and now the assertion follows from Lemma 1.3.6. $\square$

The following theorem is useful when one needs to prove that a given relation is self-adjoint. It is often easy to check that a relation is symmetric, and hence it is convenient to have available equivalent conditions for a symmetric relation to be self-adjoint.

**Theorem 1.5.5.** Let $S$ be a closed symmetric relation in $\mathfrak{H}$. Then the following statements are equivalent:

(i) $S$ is self-adjoint;

(ii) $\ker(S^\* - \lambda) = \{0\}$ for some, and hence for all, $\lambda \in \mathbb{C}^+$, and $\ker(S^\* - \mu) = \{0\}$ for some, and hence for all, $\mu \in \mathbb{C}^-$;

(iii) $\operatorname{ran}(S - \lambda) = \mathfrak{H}$ for some, and hence for all, $\lambda \in \mathbb{C}^+$, and $\operatorname{ran}(S - \mu) = \mathfrak{H}$ for some, and hence for all, $\mu \in \mathbb{C}^-$;

(iv) $\mathbb{C} \setminus \mathbb{R} \subset \rho(S)$.
If the closed symmetric relation $S$ is bounded from below by $m(S) \in \mathbb{R}$ or, more generally, $\gamma(S) \cap \mathbb{R} \neq \emptyset$, then $\lambda, \mu$ in (ii) and (iii) can also be chosen in $(-\infty, m(S))$ or $\gamma(S) \cap \mathbb{R}$, respectively, such that $\lambda = \mu$. In the case $S \geq m(S)$ item (iv) can be replaced by $\mathbb{C} \setminus [m(S), \infty) \subset \rho(S)$.

Proof. (i) ⇒ (ii) From Proposition 1.4.4 it follows that $\ker(S - \lambda) = \{0\}$ for $\lambda \in \mathbb{C} \setminus \mathbb{R}$, and as $S = S^\*$ one concludes (ii).

(ii) ⇔ (iii) follows from the identity $(\operatorname{ran}(S - \lambda))^\perp = \ker(S^\* - \overline{\lambda})$ and the fact that $\operatorname{ran}(S - \lambda)$ is closed for $\lambda \in \mathbb{C} \setminus \mathbb{R}$ by Proposition 1.4.4 and Lemma 1.2.2. Note also that $\operatorname{ran}(S - \lambda) = \mathfrak{H}$ for some $\lambda \in \mathbb{C}^\pm$ implies $\operatorname{ran}(S - \lambda) = \mathfrak{H}$ for all $\lambda \in \mathbb{C}^\pm$ by Theorem 1.2.5 and $\mathbb{C}^\pm \subset \gamma(S)$.

(iii) ⇒ (iv) Since $\mathbb{C} \setminus \mathbb{R} \subset \gamma(S)$ by Proposition 1.4.4 and $\operatorname{ran}(S - \lambda) = \mathfrak{H}$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$, is closed, it follows that $(S - \lambda)^{-1} \in \mathbf{B}(\mathfrak{H})$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Now Lemma 1.2.4 implies $\mathbb{C} \setminus \mathbb{R} \subset \rho(S)$.

(iv) ⇒ (i) It suffices to show $S^\* \subset S$. For this let $\{f, f'\} \in S^\*$ and $\lambda \in \mathbb{C} \setminus \mathbb{R}$. As $\lambda \in \rho(S)$, there exists $\{g, g'\} \in S$ such that

$$f' - \lambda f = g' - \lambda g.$$

Hence, $\{f - g, \lambda(f - g)\} = \{f, f'\} - \{g, g'\} \in S^\*$ and

$$f - g \in \ker\left(S^\* - \lambda\right) = \left(\operatorname{ran}\left(S - \overline{\lambda}\right)\right)^\perp.$$

Since with $\lambda \in \rho(S)$ also $\overline{\lambda} \in \rho(S)$, one has $\operatorname{ran}(S - \overline{\lambda}) = \mathfrak{H}$, and it follows that $f = g$ and hence $f' = g'$, that is, $\{f, f'\} \in S$.

If $S$ is bounded from below with lower bound $m(S)$, then $\mathbb{C} \setminus [m(S), \infty) \subset \gamma(S)$ by Proposition 1.4.6. Hence, if $\lambda = \mu < m(S)$, then $\operatorname{ran}(S - \lambda) = \mathfrak{H}$ implies $\operatorname{ran}(S - \lambda) = \mathfrak{H}$ for all $\lambda \in \mathbb{C} \setminus [m(S), \infty)$ by Theorem 1.2.5. It is also clear that for $\lambda = \mu < m(S)$ one has $(\operatorname{ran}(S - \lambda))^\perp = \ker(S^\* - \lambda)$, and that $\operatorname{ran}(S - \lambda)$ is closed. This shows the equivalence of (ii) and (iii), and the argument remains valid in the more general situation $\gamma(S) \cap \mathbb{R} \neq \emptyset$. As above one concludes in the case $S \geq m(S)$ from $\mathbb{C} \setminus [m(S), \infty) \subset \gamma(S)$ that (iii) implies $\mathbb{C} \setminus [m(S), \infty) \subset \rho(S)$. $\square$

Note also that if a closed symmetric relation $S$ is self-adjoint and bounded from below with lower bound $m(S) \in \mathbb{R}$, then one has $\sigma(S) \subset [m(S), \infty)$ by Theorem 1.5.5 (iv). In fact, one verifies with the help of the spectral measure of $S$, or of its operator part $S\_{\rm op}$, that

$$m(S) = \min \sigma(S).$$

In some cases it is useful to have the following variant of the equivalence of (i) and (iii) in Theorem 1.5.5 in which S is not assumed to be closed.

**Proposition 1.5.6.** Let $S$ be a symmetric relation in $\mathfrak{H}$. Then $S$ is self-adjoint if and only if $\operatorname{ran}(S - \lambda) = \mathfrak{H} = \operatorname{ran}(S - \mu)$ for some, and hence for all, $\lambda \in \mathbb{C}^+$ and $\mu \in \mathbb{C}^-$. If $S$ is semibounded with lower bound $m(S)$ or, more generally, $\gamma(S) \cap \mathbb{R} \neq \emptyset$, then $\lambda$ and $\mu$ can also be chosen in $(-\infty, m(S))$ or $\gamma(S) \cap \mathbb{R}$, respectively, such that $\lambda = \mu$.

Proof. Note that by Lemma 1.2.2 the condition $\operatorname{ran}(S - \lambda) = \mathfrak{H}$ for some $\lambda \in \gamma(S)$ implies that $S$ is closed. Now the assertions follow from Theorem 1.5.5. $\square$

Let $S$ be a symmetric relation in $\mathfrak{H}$. Then it is easy to see that

$$S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*), \quad x \in \mathbb{R}, \qquad \text{and} \qquad S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_\infty(S^\*)$$

are also symmetric relations in $\mathfrak{H}$. The next lemma provides a necessary and sufficient condition for the self-adjointness of these relations, which applies, in particular, when $\gamma(S) \cap \mathbb{R} \neq \emptyset$.

**Lemma 1.5.7.** Let $S$ be a symmetric relation in $\mathfrak{H}$ and let $x \in \mathbb{R}$.

(i) The symmetric relation $S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*)$ is self-adjoint if and only if

$$\operatorname{ran}(S - x) = \overline{\operatorname{ran}}\,(S - x) \cap \operatorname{ran}(S^\* - x).$$

In particular, if $\operatorname{ran}(S - x)$ is closed, then $S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*)$ is self-adjoint.

(ii) The symmetric relation $S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_\infty(S^\*)$ is self-adjoint if and only if

$$\operatorname{dom} S = \overline{\operatorname{dom}}\, S \cap \operatorname{dom} S^\*. \tag{1.5.8}$$

In particular, if $\operatorname{dom} S$ is closed, then $S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_\infty(S^\*)$ is self-adjoint.

Proof. (i) This assertion is a consequence of item (ii). In fact, consider the symmetric relation $T = (S - x)^{-1}$, note that $T^\* = (S^\* - x)^{-1}$ and $\operatorname{mul} T^\* = \mathfrak{N}\_x(S^\*)$, and observe that the following statements (a)–(c) are equivalent:

(a) $S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*)$ is self-adjoint;

(b) $\bigl(S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*)\bigr) - x$ is self-adjoint;

(c) $T \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_\infty(T^\*) = \bigl(\bigl(S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*)\bigr) - x\bigr)^{-1}$ is self-adjoint.

Now (ii) shows that (a)–(c) are equivalent to

$$\operatorname{dom} T = \overline{\operatorname{dom}}\, T \cap \operatorname{dom} T^\*,$$

that is, $\operatorname{ran}(S - x) = \overline{\operatorname{ran}}\,(S - x) \cap \operatorname{ran}(S^\* - x)$, which implies (i).

(ii) Observe first that the relation $S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_\infty(S^\*) = S \mathbin{\widehat{+}} \bigl(\{0\} \times \operatorname{mul} S^\*\bigr)$ is symmetric, and that by Proposition 1.3.12 and (1.3.4) its adjoint is given by

$$\left(S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_{\infty}(S^{\*})\right)^{\*} = \left(S \mathbin{\widehat{+}} (\{0\} \times \operatorname{mul} S^\*)\right)^{\*} = S^{\*} \cap \left(\overline{\operatorname{dom}}\, S \times \mathfrak{H}\right). \tag{1.5.9}$$

Now assume that $S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_\infty(S^\*)$ is self-adjoint. Then

$$S \mathbin{\widehat{+}} \left( \{ 0 \} \times \operatorname{mul} S^\* \right) = S^\* \cap \left( \overline{\operatorname{dom}}\, S \times \mathfrak{H} \right),$$

which implies (1.5.8). Conversely, assume that (1.5.8) holds. Since $S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_\infty(S^\*)$ is symmetric, it suffices to show that

$$S^\* \cap \left(\overline{\operatorname{dom}}\, S \times \mathfrak{H}\right) \subset S \mathbin{\widehat{+}} \left(\{0\} \times \operatorname{mul} S^\*\right);\tag{1.5.10}$$

cf. (1.5.9). Let $\{f, f'\}$ belong to the left-hand side of (1.5.10). Then it follows from (1.5.8) that $f \in \operatorname{dom} S$, so that $\{f, g\} \in S \subset S^\*$ for some $g \in \mathfrak{H}$. Therefore, $\{0, f' - g\} = \{f, f'\} - \{f, g\} \in S^\*$, so that $f' - g \in \operatorname{mul} S^\*$ and

$$\{f, f'\} = \{f, g\} + \{0, f' - g\} \in S \mathbin{\widehat{+}} \left(\{0\} \times \operatorname{mul} S^\*\right).$$

This shows that (1.5.10) holds. Hence, $S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_\infty(S^\*)$ is self-adjoint. $\square$

The rest of this section is devoted to self-adjoint relations that are semibounded; cf. Chapter 5. In particular, the square root of a nonnegative self-adjoint relation is constructed. The material presented here will play an essential role later in the text.

**Lemma 1.5.8.** Let $H$ be a closed relation from $\mathfrak{H}$ to $\mathfrak{K}$. Then the relations $H^\*H$ and $HH^\*$ are nonnegative and self-adjoint in $\mathfrak{H}$ and $\mathfrak{K}$, respectively.

Proof. In order to see that $H^\*H \geq 0$, let $\{h, h'\} \in H^\*H$. Then $\{h, l\} \in H$ and $\{l, h'\} \in H^\*$ for some $l \in \mathfrak{K}$, so that

$$(h',h) = (l,l) \ge 0.$$

Hence, H∗H is nonnegative and, in particular, symmetric.

In order to show that $H^\*H$ is self-adjoint in $\mathfrak{H}$ it suffices to verify the identity $\operatorname{ran}(H^\*H + I) = \mathfrak{H}$; cf. Proposition 1.5.6. For this let $f \in \mathfrak{H}$ and note that $\mathfrak{H} \times \mathfrak{K} = H \oplus H^\perp$, where the orthogonal complement is taken in $\mathfrak{H} \times \mathfrak{K}$, as $H$ is closed. It follows that there is a unique decomposition

$$\{f, 0\} = \{h, h'\} + \{k, k'\}, \quad \{h, h'\} \in H, \quad \{k, k'\} \in H^\perp.$$


Hence,

$$f = h + k, \quad 0 = h' + k',$$

which leads to $\{-k, h'\} = \{-k, -k'\} \in H^\perp$ and $\{h', k\} = J\{-k, h'\} \in JH^\perp = H^\*$, where $J$ is the flip-flop operator in (1.3.1). Thus, $\{h, h'\} \in H$ and $\{h', k\} \in H^\*$ imply $\{h, k\} \in H^\*H$ and

$$\{h, f\} = \{h, k + h\} \in H^\*H + I,$$

so that $f \in \operatorname{ran}(H^\*H + I)$. Thus, $\operatorname{ran}(H^\*H + I) = \mathfrak{H}$.

Applying what was established above to $H^\*$ (instead of $H$) and taking into account Proposition 1.3.2 (i), one concludes that $HH^\* = H^{\*\*}H^\*$ is also nonnegative and self-adjoint. $\square$
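In finite dimensions Lemma 1.5.8 reduces to the familiar fact that $M^\*M$ and $MM^\*$ are positive semidefinite Hermitian matrices for any rectangular matrix $M$. A quick numerical check, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
# A rectangular complex matrix: a bounded operator from C^3 to C^5.
M = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

MstarM = M.conj().T @ M               # acts in C^3
MMstar = M @ M.conj().T               # acts in C^5

for P in (MstarM, MMstar):
    print(np.allclose(P, P.conj().T),               # self-adjoint
          np.linalg.eigvalsh(P).min() >= -1e-12)    # nonnegative
```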

In the next theorem it will be shown that a nonnegative self-adjoint relation possesses a unique nonnegative square root.

**Theorem 1.5.9.** Let $H$ be a nonnegative self-adjoint relation in $\mathfrak{H}$. Then there exists a unique nonnegative self-adjoint relation $K$ in $\mathfrak{H}$, denoted by $K = H^{\frac{1}{2}}$, such that $K^2 = H$. Moreover, $H^{\frac{1}{2}}$ has the representation

$$H^{\frac{1}{2}} = (H\_{\rm op})^{\frac{1}{2}} \oplus H\_{\rm mul} \,. \tag{1.5.11}$$

Proof. If $H$ is a self-adjoint relation in $\mathfrak{H}$, then by Theorem 1.5.1 one has the orthogonal decomposition

$$H = H\_{\text{op}} \mathbin{\widehat{\oplus}} H\_{\text{mul}}.\tag{1.5.12}$$

Since $H$ is assumed to be nonnegative, it follows that $H\_{\rm op}$ is a nonnegative self-adjoint operator in $\mathfrak{H}\_{\rm op}$, which possesses a unique nonnegative square root $(H\_{\rm op})^{\frac{1}{2}}$ in $\mathfrak{H}\_{\rm op}$. Now clearly $K$ defined by the right-hand side of (1.5.11) is a nonnegative self-adjoint relation with $\operatorname{mul} K = \operatorname{mul} H$. Since $\operatorname{dom} H\_{\rm mul} = \{0\}$, it is clear that $(H\_{\rm mul})^2 = H\_{\rm mul}$. It follows from (1.1.9) that

$$K^2 = ((H\_{\text{op}})^{\frac{1}{2}})^2 \oplus (H\_{\text{mul}})^2 = H\_{\text{op}} \oplus H\_{\text{mul}} = H.$$

In order to show uniqueness, let $K$ be a nonnegative self-adjoint relation in $\mathfrak{H}$ such that $K^2 = H$. Then $\operatorname{mul} K = \operatorname{mul} H$. In fact, the inclusion $\operatorname{mul} K \subset \operatorname{mul} H$ is clear, as $\{0, \varphi\} \in K$ and $\{0, 0\} \in K$ show $\{0, \varphi\} \in K^2 = H$. To show that $\operatorname{mul} H \subset \operatorname{mul} K$, let $\{0, \varphi\} \in H = K^2$. Then $\{0, \psi\} \in K$ and $\{\psi, \varphi\} \in K$ for some $\psi \in \mathfrak{H}$. As $K$ is self-adjoint, $(\psi, \psi) = (0, \varphi) = 0$, that is, $\psi = 0$ and $\{0, \varphi\} \in K$, as needed. Therefore,

$$\operatorname{mul} K = \operatorname{mul} H \quad \text{and} \quad \overline{\operatorname{dom}}\, K = \overline{\operatorname{dom}}\, H.$$

Decompose the self-adjoint relation $K$ as in Theorem 1.5.1:

$$K = K\_{\rm op} \mathbin{\widehat{\oplus}} K\_{\rm mul}, \tag{1.5.13}$$

where $K\_{\rm op}$ is a nonnegative self-adjoint operator in $\overline{\operatorname{dom}}\, K = \overline{\operatorname{dom}}\, H$. Furthermore, observe from (1.5.13) and (1.1.9) that

$$H = K^2 = (K\_{\rm op})^2 \oplus (K\_{\rm mul})^2 = (K\_{\rm op})^2 \oplus K\_{\rm mul}.\tag{1.5.14}$$

Moreover, comparing (1.5.14) with (1.5.12) shows that

$$H\_{\rm op} = (K\_{\rm op})^2, \quad H\_{\rm mul} = K\_{\rm mul},$$

and since the nonnegative square root of a nonnegative self-adjoint operator is uniquely determined, it follows that $K\_{\rm op} = (H\_{\rm op})^{\frac{1}{2}}$. $\square$

Let $H$ be a nonnegative self-adjoint relation in $\mathfrak{H}$. Since Theorem 1.5.9 implies that $\operatorname{mul} H^{\frac{1}{2}} = \operatorname{mul} H$, it follows that

$$(H^{\frac{1}{2}})\_{\mathrm{op}} = (H\_{\mathrm{op}})^{\frac{1}{2}},$$

so that the notation $H\_{\rm op}^{\frac{1}{2}}$ is unambiguous.
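For a matrix, where $\operatorname{mul} H = \{0\}$ and hence $H = H\_{\rm op}$, the square root in Theorem 1.5.9 can be computed from the spectral decomposition. A sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((4, 4))
H = X @ X.T                           # nonnegative self-adjoint matrix

# From the spectral decomposition H = U diag(t) U^T, the nonnegative
# square root is H^(1/2) = U diag(sqrt(t)) U^T.
t, U = np.linalg.eigh(H)
K = (U * np.sqrt(np.clip(t, 0.0, None))) @ U.T

print(np.allclose(K @ K, H),                        # K^2 = H
      np.linalg.eigvalsh(K).min() >= -1e-10)        # K >= 0
```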

In the next lemma the square root of H − x for a semibounded relation H is considered.

**Lemma 1.5.10.** Let $H$ be a semibounded self-adjoint relation in $\mathfrak{H}$ with lower bound $\eta = m(H)$ and let $x \leq \eta$. Then the following statements hold:

(i) $\operatorname{dom} H\_{\rm op}$ is a core for the operator $(H\_{\rm op} - x)^{\frac{1}{2}}$, that is, the closure of the restriction

$$(H\_{\rm op} - x)^{\frac{1}{2}} \restriction \text{dom}\, H\_{\rm op} \tag{1.5.15}$$

coincides with $(H\_{\rm op} - x)^{\frac{1}{2}}$;

$$\text{(ii)}\ \text{dom}\left(H - x\right)^{\frac{1}{2}} = \text{dom}\left(H - \eta\right)^{\frac{1}{2}};$$

(iii) for all $h \in \operatorname{dom}(H - x)^{\frac{1}{2}} = \operatorname{dom}(H - \eta)^{\frac{1}{2}}$,

$$\|(H\_{\rm op} - x)^{\frac{1}{2}}h\|^2 + x\|h\|^2 = \|(H\_{\rm op} - \eta)^{\frac{1}{2}}h\|^2 + \eta\|h\|^2. \tag{1.5.16}$$

Proof. (i) First observe that $H\_{\rm op}$ is a self-adjoint operator in $\mathfrak{H}\_{\rm op}$ with the same lower bound as $H$, and hence $H\_{\rm op} - x$, $x \leq \eta$, is a nonnegative operator in $\mathfrak{H}\_{\rm op}$. It suffices to show that the graph of the operator in (1.5.15) is dense in the graph of the operator $(H\_{\rm op} - x)^{\frac{1}{2}}$. Therefore, assume that for some $k \in \operatorname{dom}(H\_{\rm op} - x)^{\frac{1}{2}}$ and all $h \in \operatorname{dom} H\_{\rm op}$ one has

$$0 = (h,k) + \left( (H\_{\rm op} - x)^{\frac{1}{2}}h, (H\_{\rm op} - x)^{\frac{1}{2}}k \right) = (h,k) + \left( (H\_{\rm op} - x)h, k \right).$$

Then $k$ is orthogonal to $\operatorname{ran}((H\_{\rm op} - x) + I)$, and as $x - 1 < \eta$, it follows that $\operatorname{ran}((H\_{\rm op} - x) + I) = \mathfrak{H}\_{\rm op}$. Hence, $k = 0$. This implies (i).

(ii) & (iii) Note first that for h ∈ dom H = dom Hop the identity

$$((H\_{\text{op}} - x)h, h) + x(h, h) = ((H\_{\text{op}} - \eta)h, h) + \eta(h, h)$$

can be rewritten in the form

$$\|(H\_{\rm op} - x)^{\frac{1}{2}}h\|^2 + x\|h\|^2 = \|(H\_{\rm op} - \eta)^{\frac{1}{2}}h\|^2 + \eta\|h\|^2, \quad h \in \text{dom}\, H,\tag{1.5.17}$$

which coincides with (1.5.16) on dom H.

In order to show the inclusion (⊂) in (ii), assume that

$$h \in \text{dom}\,(H - x)^{\frac{1}{2}} = \text{dom}\,(H\_{\text{op}} - x)^{\frac{1}{2}}.$$

According to (i), there exists a sequence h<sup>n</sup> ∈ dom Hop such that

$$h\_n \to h \quad \text{and} \quad (H\_{\text{op}} - x)^{\frac{1}{2}} h\_n \to (H\_{\text{op}} - x)^{\frac{1}{2}} h. \tag{1.5.18}$$

Therefore, it follows from (1.5.17) that $(H\_{\rm op} - \eta)^{\frac{1}{2}} h\_n$ is a Cauchy sequence in $\mathfrak{H}\_{\rm op}$. Since $h\_n \to h$ and the operator $(H\_{\rm op} - \eta)^{\frac{1}{2}}$ is closed, one concludes that $h \in \operatorname{dom}(H\_{\rm op} - \eta)^{\frac{1}{2}} = \operatorname{dom}(H - \eta)^{\frac{1}{2}}$. The other inclusion in (ii) is shown in the same way.

It remains to verify (1.5.16). Choose $h \in \operatorname{dom}(H - x)^{\frac{1}{2}} = \operatorname{dom}(H - \eta)^{\frac{1}{2}}$ and use (i) to get a sequence $h\_n \in \operatorname{dom} H\_{\rm op}$ as in (1.5.18). Then (1.5.17) shows that $(H\_{\rm op} - \eta)^{\frac{1}{2}} h\_n$ is a Cauchy sequence in $\mathfrak{H}\_{\rm op}$, and as $(H\_{\rm op} - \eta)^{\frac{1}{2}}$ is closed one has

$$(H\_{\rm op} - \eta)^{\frac{1}{2}} h\_n \to (H\_{\rm op} - \eta)^{\frac{1}{2}} h, \quad n \to \infty. \tag{1.5.19}$$

From (1.5.17) applied to $h\_n$, together with (1.5.18) and (1.5.19), one obtains (1.5.16). $\square$

In the next proposition two semibounded self-adjoint relations are considered. It turns out that the inclusion of the square root domains implies a strong norm inequality for the operator parts.

**Proposition 1.5.11.** Let H<sup>1</sup> and H<sup>2</sup> be semibounded self-adjoint relations in H with lower bounds m(H1) and m(H2) and let x < min {m(H1), m(H2)}. Then the inclusion

$$\text{dom}\,(H\_2 - x)^{\frac{1}{2}} \subset \text{dom}\,(H\_1 - x)^{\frac{1}{2}},\tag{1.5.20}$$

together with the inequality

$$\|(H\_{1,\text{op}} - x)^{\frac{1}{2}}\varphi\| \le \rho \|(H\_{2,\text{op}} - x)^{\frac{1}{2}}\varphi\|, \quad \varphi \in \text{dom}\,(H\_2 - x)^{\frac{1}{2}},\tag{1.5.21}$$

where ρ > 0, are equivalent to the inequality

$$(H\_2 - x)^{-1} \le \rho^2 (H\_1 - x)^{-1}.\tag{1.5.22}$$

Moreover, if the inclusion (1.5.20) holds, then there exists ρ > 0 for which the inequality (1.5.21) is satisfied.

Proof. Let x < min {m(H1), m(H2)}, so that

$$A = (H\_2 - x)^{-1} \quad \text{and} \quad B = (H\_1 - x)^{-1}$$

are nonnegative operators in $\mathbf{B}(\mathfrak{H})$. Note that their square roots are given by $A^{\frac{1}{2}} = (H\_2 - x)^{-\frac{1}{2}}$ and $B^{\frac{1}{2}} = (H\_1 - x)^{-\frac{1}{2}}$. Hence, the Moore–Penrose inverses of $A^{\frac{1}{2}}$ and $B^{\frac{1}{2}}$ are given by

$$A^{( -\frac{1}{2})} = (H\_{2, \text{op}} - x)^{\frac{1}{2}}, \quad B^{( -\frac{1}{2})} = (H\_{1, \text{op}} - x)^{\frac{1}{2}},$$

cf. Definition 1.3.17 and Example 1.3.18, where it was used that

$$\ker A^{\frac{1}{2}} = \ker A = \operatorname{mul} H\_2 \quad \text{and} \quad \ker B^{\frac{1}{2}} = \ker B = \operatorname{mul} H\_1.$$

In terms of the operators A and B, the statements in (1.5.20) and (1.5.21) mean that

$$\operatorname{ran} A^{\frac{1}{2}} \subset \operatorname{ran} B^{\frac{1}{2}} \quad \text{and} \quad \|B^{( - \frac{1}{2})} \varphi\| \le \rho \|A^{( - \frac{1}{2})} \varphi\|, \quad \varphi \in \operatorname{ran} A^{\frac{1}{2}},\tag{1.5.23}$$

while the statement in (1.5.22) means that

$$A \le \rho^2 B. \tag{1.5.24}$$

The equivalence of (1.5.23) and (1.5.24) follows from Proposition D.8. Moreover, Proposition D.8 also shows that (1.5.20) implies (1.5.21) for some $\rho > 0$, which completes the proof. $\square$
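The equivalence in Proposition 1.5.11 can be illustrated with positive definite matrices, where the domain inclusion (1.5.20) is automatic and only the inequalities remain. A sketch, assuming numpy: the smallest admissible $\rho^2$ for (1.5.21) is computed as the largest eigenvalue of $(H\_2 - x)^{-1/2}(H\_1 - x)(H\_2 - x)^{-1/2}$, and (1.5.22) is then verified.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
def spd():
    X = rng.standard_normal((n, n))
    return X @ X.T + np.eye(n)        # positive definite; lower bound >= 1

H1, H2 = spd(), spd()
x = 0.0                               # x < min{m(H1), m(H2)}
I = np.eye(n)

def inv_sqrt(M):
    t, U = np.linalg.eigh(M)
    return (U * t ** -0.5) @ U.T

# Smallest rho^2 with H1 - x <= rho^2 (H2 - x), i.e. with (1.5.21):
C = inv_sqrt(H2 - x * I)
rho2 = np.linalg.eigvalsh(C @ (H1 - x * I) @ C).max()

# Then (1.5.22) holds: (H2 - x)^{-1} <= rho^2 (H1 - x)^{-1}.
D = rho2 * np.linalg.inv(H1 - x * I) - np.linalg.inv(H2 - x * I)
print(np.linalg.eigvalsh(D).min() >= -1e-10)        # True
```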

The next lemma characterizes the domain of the square root of a nonnegative self-adjoint relation.

**Lemma 1.5.12.** Let H be a nonnegative self-adjoint relation in H and let ϕ ∈ H. Then the function

$$x \mapsto \left( (H^{-1} - x)^{-1} \varphi, \varphi \right), \quad x \in ( -\infty, 0),$$

is nondecreasing, and

$$\lim\_{x \uparrow 0} \left( (H^{-1} - x)^{-1} \varphi, \varphi \right) = \begin{cases} \|H\_{\text{op}}^{\frac{1}{2}} \varphi\|^2, & \varphi \in \text{dom}\, H^{\frac{1}{2}},\\ \infty, & \text{otherwise.} \end{cases} \tag{1.5.25}$$

Proof. Since $H$ is a nonnegative self-adjoint relation, so is $H^{-1}$, and each resolvent operator in the identity

$$(H^{-1} - x)^{-1} = -\frac{1}{x} - \frac{1}{x^2} \left( H - \frac{1}{x} \right)^{-1}, \quad x < 0,\tag{1.5.26}$$

belongs to $\mathbf{B}(\mathfrak{H})$ by Corollary 1.1.12 and (1.2.14). Since $\ker(H - 1/x)^{-1} = \operatorname{mul} H$, it follows from (1.5.26) that for each $x < 0$ and $\varphi \in \mathfrak{H}$

$$\begin{split} \left( (H^{-1} - x)^{-1} \varphi, \varphi \right) &= -\frac{1}{x} \| (I - P\_{\text{op}}) \varphi \| ^2 - \frac{1}{x} \| P\_{\text{op}} \varphi \| ^2 \\ &- \frac{1}{x^2} \left( \left( H - \frac{1}{x} \right)^{-1} P\_{\text{op}} \varphi, P\_{\text{op}} \varphi \right), \end{split} \tag{1.5.27}$$

where $P\_{\rm op}$ is the orthogonal projection from $\mathfrak{H}$ onto $\overline{\operatorname{dom}}\, H$. Let $E(\cdot)$ be the spectral measure of $H$, so that $H\_{\rm op} = \int\_0^\infty t\, dE\_{\rm op}(t)$. Then formula (1.5.27) can be rewritten for each $x < 0$ and $\varphi \in \mathfrak{H}$ as

$$\begin{split} \left\{ (H^{-1} - x)^{-1} \varphi, \varphi \right\} &= -\frac{1}{x} \left\| (I - P\_{\text{op}}) \varphi \right\|^{2} \\ &- \int\_{0}^{\infty} \frac{t}{tx - 1} \, d(E\_{\text{op}} \, (t) P\_{\text{op}} \, \varphi, P\_{\text{op}} \, \varphi) . \end{split} \tag{1.5.28}$$

In particular, (1.5.28) shows that the function in (1.5.25) is nondecreasing for x ∈ (−∞, 0).

Furthermore, by the nonnegativity of the terms in (1.5.28), the limit as x ↑ 0 of the left-hand side in (1.5.28) is finite if and only if the limit of each of the terms on the right-hand side of (1.5.28) is finite. The first limit is finite if and only if (I − Pop)ϕ = 0, i.e., Popϕ = ϕ, and hence ϕ belongs to the closure of dom H. By the monotone convergence theorem, the limit of the second term is equal to ∫₀^∞ t d(Eop(t)ϕ, ϕ), which is finite and equal to

$$\|H\_{\mathrm{op}}^{\frac{1}{2}}\varphi\|^2$$

if and only if ϕ ∈ dom H^{1/2}. □

## **1.6 Maximal dissipative and accumulative relations**

In this section the basic properties of dissipative and accumulative relations are discussed. Of special interest are dissipative and accumulative relations which are maximal with this property. Such relations admit an orthogonal decomposition into a maximal dissipative or maximal accumulative operator and a self-adjoint purely multivalued part.

**Definition 1.6.1.** A relation H in H is said to be dissipative (accumulative) if Im (f′, f) ≥ 0 (Im (f′, f) ≤ 0) for all {f, f′} ∈ H. The relation H is said to be maximal dissipative (maximal accumulative) if every dissipative (accumulative) extension H′ of H in H satisfies H′ = H.

It is easy to see that if a relation H in H is dissipative or accumulative, then so is its closure. Hence, maximal dissipative or maximal accumulative relations are automatically closed.
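A minimal concrete example may help fix ideas; it is added here for illustration and is not part of the surrounding results. In H = C, let H be the operator Hf = if. Then

$$\operatorname{Im}\,(Hf, f) = \operatorname{Im}\left(i\|f\|^2\right) = \|f\|^2 \ge 0,$$

so H is dissipative, while −H is accumulative. Since ran (H − λ) = C for every λ ∈ C⁻, the operator H is in fact maximal dissipative; cf. Theorem 1.6.4 below.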


Note that a linear relation H is dissipative (maximal dissipative) if and only if the relation −H is accumulative (maximal accumulative). Thus, it suffices to state results for dissipative relations; the corresponding results for accumulative relations follow immediately.

**Lemma 1.6.2.** Let H be a dissipative relation in H. Then mul H ⊂ mul H∗. If H is maximal dissipative, then mul H = mul H∗.

Proof. Let k ∈ mul H and let {h, h′} ∈ H. Since {0, λk} ∈ H for every λ ∈ C, one has {h, h′ + λk} ∈ H. Since H is dissipative, one has

$$\operatorname{Im}\left(h',h\right) + \operatorname{Im}\left(\lambda(k,h)\right) = \operatorname{Im}\left(h' + \lambda k, h\right) \ge 0, \qquad \lambda \in \mathbb{C}.$$

In this inequality λ ∈ C is arbitrary and hence one concludes (k, h) = 0. Therefore, mul H ⊂ (dom H)⊥ = mul H∗.

If H is dissipative and k ∈ mul H∗ = (dom H)⊥, then it follows that

$$H \stackrel{\frown}{+} \text{span}\,\{\{0, k\}\}$$

is a dissipative extension of H. Hence, if H is maximal dissipative, then {0, k} ∈ H and k ∈ mul H. □

The next proposition is the analog of Proposition 1.4.4. Its proof is almost the same, and depends on the estimate

$$0 \le -\text{Im}\,\lambda(f,f) \le \text{Im}\,(f'-\lambda f,f) \le ||f'-\lambda f|| ||f||,$$

which is valid for λ ∈ C⁻ and {f, f′} ∈ H such that Im (f′, f) ≥ 0.

**Proposition 1.6.3.** Let H be a dissipative relation in H. Then C⁻ is contained in γ(H) and, in particular, the defect number nλ(H) = dim (ran (H − λ))⊥ is constant for all λ ∈ C⁻. Furthermore,

$$\|(H - \lambda)^{-1}h\| \le \frac{1}{-\text{Im}\,\lambda} \|h\|$$

for all h ∈ dom (H − λ)⁻¹ = ran (H − λ) and λ ∈ C⁻.
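For the dissipative operator Hf = if in H = C the estimate in Proposition 1.6.3 can be checked by hand; this computation is an added illustration. For λ ∈ C⁻ one has |i − λ| ≥ Im (i − λ) = 1 − Im λ ≥ −Im λ, and hence

$$\|(H - \lambda)^{-1}h\| = \frac{|h|}{|i - \lambda|} \le \frac{1}{-\operatorname{Im}\lambda}\,\|h\|, \qquad h \in \mathbb{C}.$$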

For dissipative relations Theorem 1.5.5 reads as follows.

**Theorem 1.6.4.** Let H be a closed dissipative relation in H. Then the following statements are equivalent:

(i) H is maximal dissipative;

(ii) ran (H − λ₀) = H for some λ₀ ∈ C⁻;

(iii) ran (H − λ) = H for all λ ∈ C⁻;

(iv) C⁻ ⊂ ρ(H).
Proof. The implications (ii) ⇔ (iii) and (iii) ⇒ (iv) follow with the same arguments as in the proof of Theorem 1.5.5.

(i) ⇒ (iii) As H is closed, also ran (H − λ) is closed for all λ ∈ C⁻ by Lemma 1.2.2, since C⁻ ⊂ γ(H) by Proposition 1.6.3. Now assume that ran (H − λ) is a proper subspace of H for some λ ∈ C⁻ and define the relation H′ in H by

$$H' := \left\{ \{f + f\_{\overline{\lambda}}, f' + \overline{\lambda}f\_{\overline{\lambda}}\} : \{f, f'\} \in H, \, f\_{\overline{\lambda}} \in \mathfrak{N}\_{\overline{\lambda}}(H^\*)\right\},$$

where Nλ̄(H∗) = ker (H∗ − λ̄) = (ran (H − λ))⊥. Then clearly H ⊂ H′, and as

$$\begin{aligned} \text{ran}\,(H'-\lambda) &= \left\{ f' - \lambda f + (\overline{\lambda} - \lambda) f\_{\overline{\lambda}} : \{ f, f' \} \in H, \, f\_{\overline{\lambda}} \in \mathfrak{N}\_{\overline{\lambda}}(H^\*) \right\} \\ &= \text{ran}\,(H - \lambda) \oplus \mathfrak{N}\_{\overline{\lambda}}(H^\*) = \mathfrak{H} \end{aligned}$$

and Nλ̄(H∗) ≠ {0}, it follows that H′ is a proper extension of H in H, H ≠ H′. Since fλ̄ ∈ Nλ̄(H∗) implies {fλ̄, λ̄fλ̄} ∈ H∗, one sees that

$$(f', f\_{\overline{\lambda}}) = (f, \overline{\lambda} f\_{\overline{\lambda}}) = \lambda (f, f\_{\overline{\lambda}})$$

for all {f, f′} ∈ H. Hence,

$$\begin{aligned} \left(f' + \overline{\lambda}f\_{\overline{\lambda}}, f + f\_{\overline{\lambda}}\right) &= \left(f', f\right) + \left(f', f\_{\overline{\lambda}}\right) + \overline{\lambda}\left(f\_{\overline{\lambda}}, f\right) + \overline{\lambda}\left(f\_{\overline{\lambda}}, f\_{\overline{\lambda}}\right) \\ &= \left(f', f\right) + 2\text{Re}\left(\lambda\left(f, f\_{\overline{\lambda}}\right)\right) + \overline{\lambda}\left(f\_{\overline{\lambda}}, f\_{\overline{\lambda}}\right) \end{aligned}$$

and from the assumptions that H is dissipative and λ ∈ C⁻ one concludes that

$$\operatorname{Im}\left(f' + \overline{\lambda}f\_{\overline{\lambda}}, f + f\_{\overline{\lambda}}\right) = \operatorname{Im}\left(f', f\right) + \operatorname{Im}\overline{\lambda}(f\_{\overline{\lambda}}, f\_{\overline{\lambda}}) \ge 0,$$

i.e., H′ is a proper dissipative extension of H in H. Thus, H is not maximal dissipative. This proves that (i) implies (iii).

(iv) ⇒ (i) Suppose that H′ is a dissipative extension of H, and let {f, f′} ∈ H′ and λ ∈ C⁻. As C⁻ ⊂ ρ(H), there exists {g, g′} ∈ H such that

$$f' - \lambda f = g' - \lambda g.$$

This implies {f − g, f′ − g′} ∈ H′ and hence f − g ∈ ker (H′ − λ). As H′ is dissipative and λ ∈ C⁻, it follows from Proposition 1.6.3 that f = g, which also gives f′ = g′. This shows {f, f′} ∈ H and hence H′ = H. Therefore, H is maximal dissipative. □

**Corollary 1.6.5.** Let H be a relation in H. Then the following statements hold:

(i) If H is maximal dissipative, then (H − λ)⁻¹ ∈ **B**(H) is accumulative for all λ ∈ C⁻.

(ii) If H is closed, C⁻ ⊂ ρ(H), and (H − λ)⁻¹ ∈ **B**(H) is accumulative for all λ ∈ C⁻, then H is maximal dissipative.
Proof. (i) Assume that H is maximal dissipative, which implies C⁻ ⊂ ρ(H). Let {f, f′} ∈ H. Then for λ ∈ C⁻ the identity

$$\begin{aligned} \operatorname{Im} \left( (H - \lambda)^{-1} (f' - \lambda f), f' - \lambda f \right) &= \operatorname{Im} \left( f, f' - \lambda f \right) \\ &= -\operatorname{Im} \left( f' - \lambda f, f \right) \\ &= -\operatorname{Im} \left( f', f \right) + \operatorname{Im} \lambda (f, f) \end{aligned} \tag{1.6.1}$$

shows that (H − λ)⁻¹ ∈ **B**(H) is accumulative.

(ii) Assume that H is closed, C⁻ ⊂ ρ(H), and (H − λ)⁻¹ ∈ **B**(H) is accumulative for all λ ∈ C⁻. Then (H − λ)⁻¹(f′ − λf) = f for all {f, f′} ∈ H and λ ∈ C⁻, and hence the identity (1.6.1) shows that

$$\operatorname{Im}\left(f',f\right) \ge \operatorname{Im}\lambda(f,f) \quad \text{for all} \quad \lambda \in \mathbb{C}^-\text{.}$$

This implies that H is dissipative and, since H is closed and C⁻ ⊂ ρ(H), it follows from Theorem 1.6.4 that H is maximal dissipative. □

In the next proposition the Cayley transform and the inverse Cayley transform of accumulative, dissipative, maximal accumulative, and maximal dissipative relations are considered. This proposition is the counterpart of Proposition 1.4.8 and Proposition 1.5.4.

**Proposition 1.6.6.** Let μ ∈ C \ R and let Cμ and Fμ be the Cayley transform and inverse Cayley transform in Definition 1.1.13. Let H and V be relations in H such that V = Cμ[H] or, equivalently, H = Fμ[V]. Then the following statements hold:

(i) For μ ∈ C⁻ (μ ∈ C⁺) the relation H is dissipative (accumulative) if and only if V is a contractive operator.

(ii) For μ ∈ C⁻ (μ ∈ C⁺) the relation H is maximal dissipative (maximal accumulative) if and only if V is a contractive operator with dom V = H.
Proof. (i) Let H be dissipative or accumulative and for μ ∈ C \ R define

$$V = \mathcal{C}\_{\mu}[H] = \left\{ \{ f' - \mu f, f' - \overline{\mu} f \} : \{ f, f' \} \in H \right\}.\tag{1.6.2}$$

Then a straightforward computation shows that

$$\begin{aligned} \|f' - \overline{\mu}f\|^2 - \|f' - \mu f\|^2 &= 2\text{Re}\left( (\overline{\mu} - \mu)(f', f) \right) \\ &= 4(\text{Im}\,\mu)\,\text{Im}\,(f', f) \end{aligned}$$

for all {f, f′} ∈ H. Hence, ‖f′ − μ̄f‖ ≤ ‖f′ − μf‖ when μ ∈ C⁻ and H is dissipative, or when μ ∈ C⁺ and H is accumulative. This implies that V in (1.6.2) is a contractive operator.

Conversely, let V be a contractive operator and for μ ∈ C \ R define

$$H = \mathcal{F}\_{\mu}[V] = \left\{ \{k - k', \overline{\mu}k - \mu k'\} : \{k, k'\} \in V \right\}.\tag{1.6.3}$$

Then a computation shows

$$
\left(\overline{\mu}k - \mu k', k - k'\right) = \overline{\mu}(k, k) - 2\text{Re}\left(\mu(k', k)\right) + \mu(k', k'),
$$

and consequently

$$\operatorname{Im}\left(\overline{\mu}k - \mu k', k - k'\right) = -\operatorname{Im}\mu\left(\|k\|^2 - \|k'\|^2\right). \tag{1.6.4}$$

Since ‖k′‖ ≤ ‖k‖ for {k, k′} ∈ V, it follows from (1.6.4) that H in (1.6.3) is dissipative for μ ∈ C⁻ and accumulative for μ ∈ C⁺.

(ii) If H is maximal dissipative or maximal accumulative, then ran (H − μ) = H for μ ∈ C⁻ or μ ∈ C⁺, respectively; cf. Theorem 1.6.4. Hence, V in (1.6.2) satisfies dom V = H.

Conversely, if dom V = H, then (1.6.3) implies ran (H − μ) = H, and Theorem 1.6.4 then shows that H is maximal dissipative or maximal accumulative. □
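To illustrate Proposition 1.6.6 with a concrete computation (added here as an example), take H = C, the maximal dissipative operator Hf = if, and μ = −i ∈ C⁻. For {f, if} ∈ H one has f′ − μf = 2if and f′ − μ̄f = if − if = 0, so that

$$V = \mathcal{C}\_{-i}[H] = \left\{ \{2if, 0\} : f \in \mathbb{C} \right\},$$

that is, V is the zero operator on C: a contraction with dom V = H, in accordance with Proposition 1.6.6.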

The following result is sometimes useful.

**Proposition 1.6.7.** Let H be a closed relation in H. Then H is maximal dissipative if and only if H<sup>∗</sup> is maximal accumulative.

Proof. Let H be a maximal dissipative relation. Then Proposition 1.6.6 shows that for μ ∈ C⁻ the Cayley transform Cμ[H] is a contractive operator defined on H. But then also the adjoint is a contractive operator defined on all of H. Observe that by Lemma 1.3.11

$$(\mathcal{C}\_{\mu}[H])^{\*} = \mathcal{C}\_{\overline{\mu}}[H^{\*}],$$

which, since μ̄ ∈ C⁺, implies by Proposition 1.6.6 that H∗ is maximal accumulative. The converse is proved in the same way. □

The next proposition is of a slightly different nature than the previous results and complements Lemma 1.5.7. It shows that a closed symmetric relation always admits maximal dissipative and maximal accumulative extensions.

**Proposition 1.6.8.** Let S be a closed symmetric relation in H, let λ ∈ C \ R, and let

$$H = S \stackrel{\frown}{+} \widehat{\mathfrak{N}}\_{\lambda}(S^\*). \tag{1.6.5}$$

Then H is a closed extension of S and the sum is direct. Moreover,

(i) H is maximal dissipative if λ ∈ C⁺;

(ii) H is maximal accumulative if λ ∈ C⁻.
Proof. Since the eigenvalues of S are real, the sum in (1.6.5) is direct. In order to verify (i) and (ii), note that a typical element in H is of the form {f + fλ, f′ + λfλ} with {f, f′} ∈ S and {fλ, λfλ} ∈ S∗. Therefore, as S is symmetric,

$$(f', f\_\lambda) = (f, \lambda f\_\lambda).$$

Hence, the identity

$$\left(f' + \lambda f\_{\lambda}, f + f\_{\lambda}\right) = \left(f', f\right) + 2\text{Re}\left(\lambda(f\_{\lambda}, f)\right) + \lambda(f\_{\lambda}, f\_{\lambda})$$

together with Im (f′, f) = 0 shows that

$$\operatorname{Im}\left(f' + \lambda f\_{\lambda}, f + f\_{\lambda}\right) = (\operatorname{Im}\lambda)(f\_{\lambda}, f\_{\lambda}).$$

Therefore, H is dissipative for λ ∈ C⁺ and accumulative for λ ∈ C⁻. Finally, observe that (1.6.5) implies

$$\text{ran}\left(H - \overline{\lambda}\right) = \text{ran}\left(S - \overline{\lambda}\right) \oplus \ker\left(S^\* - \lambda\right) = \mathfrak{H},$$

and Theorem 1.6.4 shows that H is maximal dissipative for λ ∈ C⁺ and maximal accumulative for λ ∈ C⁻. In particular, H is closed. □

The next proposition provides a direct sum decomposition of H based on the construction in Proposition 1.6.8.

**Proposition 1.6.9.** Let S be a closed symmetric relation in H and let μ ∈ C \ R. Then for λ in the same half-plane as μ there is the direct sum decomposition

$$\mathfrak{H} = \text{ran}\left( S - \lambda \right) + \ker \left( S^\* - \overline{\mu} \right). \tag{1.6.6}$$

Proof. Let the relation H(μ̄) be defined by H(μ̄) = S +̂ N̂μ̄(S∗). A straightforward calculation involving the Cayley transform

$$\mathcal{C}\_{\mu}[H(\overline{\mu})] = \left\{ \{h' - \mu h + \eta, h' - \overline{\mu}h\} \, : \, \{h, h'\} \in S, \ \eta \in \mathfrak{N}\_{\overline{\mu}}(S^\*) \right\},$$

yields the identity

$$\begin{split} &I - \frac{\lambda - \mu}{\lambda - \overline{\mu}} \mathcal{C}\_{\mu}[H(\overline{\mu})] \\ &= \left\{ \left\{ h' - \mu h + \eta, \frac{\mu - \overline{\mu}}{\lambda - \overline{\mu}} (h' - \lambda h) + \eta \right\} : \{ h, h' \} \in S, \ \eta \in \mathfrak{N}\_{\overline{\mu}}(S^\*) \right\}. \end{split} \tag{1.6.7}$$

Note that the domain and the range of this relation are given by

$$\text{ran}\,(S - \mu) \oplus \ker\,(S^\* - \overline{\mu}) = \mathfrak{H} \quad \text{and} \quad \text{ran}\,(S - \lambda) + \ker\,(S^\* - \overline{\mu}), \tag{1.6.8}$$

respectively.

Now observe that by Proposition 1.6.8 the relation H(μ̄) is maximal dissipative for μ ∈ C⁻ or maximal accumulative for μ ∈ C⁺, and thus the Cayley transform Cμ[H(μ̄)] is a contraction defined on H; cf. Proposition 1.6.6. Due to the assumption about λ, one has |λ − μ| < |λ − μ̄|, and hence the left-hand side of (1.6.7) is a bijection from H onto H. Therefore, the decomposition (1.6.6) follows from the second identity in (1.6.8). To see that the decomposition (1.6.6) is direct, assume that the second component on the right-hand side of (1.6.7) is zero. Then the first component must be zero, so that h′ = μh and η = 0. Since S is symmetric, it follows from {h, h′} ∈ S and h′ = μh that h = 0 and h′ = 0. Thus, indeed, the sum in (1.6.6) is direct. □

The next assertion is a special case of Lemma 1.4.10 for maximal dissipative and maximal accumulative extensions.

**Lemma 1.6.10.** Let S be a closed symmetric relation in H and let H ⊂ S∗ be a maximal dissipative (maximal accumulative) extension of S. Then for λ, μ ∈ C⁻ (λ, μ ∈ C⁺)

$$I + (\lambda - \mu)(H - \lambda)^{-1} \tag{1.6.9}$$

is boundedly invertible with inverse I + (μ − λ)(H − μ)⁻¹. For fixed μ ∈ C⁻ (μ ∈ C⁺), the mapping (1.6.9) is holomorphic in λ ∈ C⁻ (λ ∈ C⁺). Moreover, the operator in (1.6.9) maps Nμ(S∗) bijectively onto Nλ(S∗).
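Although the proof of Lemma 1.6.10 is omitted, the inverse formula can be verified directly from the resolvent identity (H − λ)⁻¹ − (H − μ)⁻¹ = (λ − μ)(H − λ)⁻¹(H − μ)⁻¹; the short computation is included here for convenience:

$$\begin{split} &\left(I + (\lambda - \mu)(H - \lambda)^{-1}\right)\left(I + (\mu - \lambda)(H - \mu)^{-1}\right) \\ &= I + (\lambda - \mu)\left[(H - \lambda)^{-1} - (H - \mu)^{-1} - (\lambda - \mu)(H - \lambda)^{-1}(H - \mu)^{-1}\right] = I. \end{split}$$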

The following useful fact about the closed span (denoted by an overlined span, as in (1.6.10) below) of the defect spaces of a symmetric relation will be used in Chapter 3.

**Lemma 1.6.11.** Let S be a closed symmetric relation in H. Let U⁻ ⊂ C⁻ be a set which has an accumulation point in C⁻, and let U⁺ ⊂ C⁺ be a set which has an accumulation point in C⁺. Then

$$\overline{\text{span}}\left\{\mathfrak{N}\_{\lambda}(S^{\*}):\lambda\in\mathcal{U}^{-}\right\}=\overline{\text{span}}\left\{\mathfrak{N}\_{\lambda}(S^{\*}):\lambda\in\mathbb{C}^{-}\right\}\tag{1.6.10}$$

and

$$\overline{\text{span}}\left\{\mathfrak{N}\_{\lambda}(S^{\*}):\lambda\in\mathcal{U}^{+}\right\}=\overline{\text{span}}\left\{\mathfrak{N}\_{\lambda}(S^{\*}):\lambda\in\mathbb{C}^{+}\right\}.\tag{1.6.11}$$

Proof. The equality (1.6.10) will be shown; the proof of (1.6.11) is analogous. Note that the inclusion (⊂) in (1.6.10) is clear, and hence it remains to verify the inclusion (⊃) in (1.6.10). It is sufficient to show that

$$\left(\text{span}\left\{\mathfrak{N}\_{\lambda}(S^{\*}):\lambda\in\mathcal{U}^{-}\right\}\right)^{\perp}\subset\left(\text{span}\left\{\mathfrak{N}\_{\lambda}(S^{\*}):\lambda\in\mathbb{C}^{-}\right\}\right)^{\perp}\tag{1.6.12}$$

holds. Fix a maximal dissipative extension H of S (see Proposition 1.6.8) and assume that f ∈ H belongs to the left-hand side of (1.6.12). Then (fλ, f) = 0 for all λ ∈ U⁻ and fλ ∈ Nλ(S∗). By Lemma 1.6.10, for μ ∈ C⁻ the mapping

$$
\lambda \mapsto I + (\lambda - \mu)(H - \lambda)^{-1} \tag{1.6.13}
$$

is holomorphic in C⁻ and the operator in (1.6.13) maps Nμ(S∗) bijectively onto Nλ(S∗). Fix μ ∈ C⁻ and fμ ∈ Nμ(S∗), and consider the element

$$f\_{\lambda} = (I + (\lambda - \mu)(H - \lambda)^{-1})f\_{\mu}.$$

Then for all λ ∈ U<sup>−</sup> one has

$$\left( (I + (\lambda - \mu)(H - \lambda)^{-1}) f\_{\mu}, f \right) = (f\_{\lambda}, f) = 0.$$

Since the function λ ↦ ((I + (λ − μ)(H − λ)⁻¹)fμ, f) = (fλ, f) is holomorphic on C⁻ and vanishes on U⁻, one must have for all λ ∈ C⁻

$$\left( (I + (\lambda - \mu)(H - \lambda)^{-1}) f\_{\mu}, f \right) = (f\_{\lambda}, f) = 0.$$

Since fμ ∈ Nμ(S∗) was arbitrary and (1.6.13) maps Nμ(S∗) bijectively onto Nλ(S∗), it follows that (fλ, f) = 0 for all fλ ∈ Nλ(S∗) and all λ ∈ C⁻. Therefore, f belongs to the right-hand side of (1.6.12). □

Here is a variant of the decomposition in Theorem 1.3.16 for closed dissipative (accumulative) relations; cf. Theorem 1.4.11 and Theorem 1.5.1.

**Theorem 1.6.12.** Let H be a closed dissipative (accumulative) relation in H, decompose the space H as H = Hop ⊕ Hmul, where Hop := (mul H)⊥ and Hmul := mul H, and denote the orthogonal projection from H onto Hop by Pop. Then H is the direct orthogonal sum Hop ⊕ Hmul of the closed dissipative (accumulative) operator

$$H\_{\mathrm{op}} = \left\{ \{ f, P\_{\mathrm{op}} f' \} : \{ f, f' \} \in H \right\}$$

in Hop and the self-adjoint purely multivalued relation

$$H\_{\text{mul}} = \left\{ \{0, f'\} : f' \in \mathfrak{H}\_{\text{mul}} \right\} = \left\{ \{0, (I - P\_{\text{op}})f'\} : \{f, f'\} \in H \right\}$$

in Hmul . Moreover, the operator Hop is densely defined in Hop if and only if mul H = mul H∗. If the relation H is maximal dissipative (maximal accumulative ), then Hop is a densely defined maximal dissipative (maximal accumulative ) operator in Hop .

Proof. The proof follows the proof of Theorem 1.4.11 for symmetric relations. In order to apply Theorem 1.3.16, one now has to recall that mul H = mul H∗∗ and that the inclusion mul H ⊂ mul H∗ holds by Lemma 1.6.2. The assertion about maximality follows from Lemma 1.6.2. □
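A simple finite-dimensional illustration of Theorem 1.6.12 (added here as an example) is the relation

$$H = \left\{ \{(x, 0), (ix, y)\} : x, y \in \mathbb{C} \right\}$$

in H = C². Here mul H = {0} × C, so (mul H)⊥ = C × {0}. The operator part Hop acts as multiplication by i in C × {0} and is dissipative, since Im ((ix, 0), (x, 0)) = |x|² ≥ 0, while Hmul = {{(0, 0), (0, y)} : y ∈ C} is a self-adjoint purely multivalued relation.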

## **1.7 Intermediate extensions and von Neumann's formulas**

In this section intermediate extensions of a symmetric relation will be studied, with special attention paid to disjoint and transversal extensions. Furthermore, some important decompositions of intermediate extensions and the adjoint relation will be discussed. In particular, these investigations lead to the von Neumann formulas in the context of relations, which provide a description of accumulative, dissipative, symmetric, and self-adjoint extensions in terms of contractive, isometric, and unitary operators between the defect spaces of the symmetric relation.

The first result is a decomposition of a relation in a Hilbert space which has a closed restriction with nonempty resolvent set. As in Definition 1.4.9, the following notations are associated with the eigenspace of a relation T at λ ∈ C:

$$
\mathfrak{N}\_{\lambda}(T) = \ker \left( T - \lambda \right) \quad \text{and} \quad \widehat{\mathfrak{N}}\_{\lambda}(T) = \left\{ \left\{ f\_{\lambda}, \lambda f\_{\lambda} \right\} : f\_{\lambda} \in \mathfrak{N}\_{\lambda}(T) \right\}.
$$

**Theorem 1.7.1.** Let T be a relation in H and let the relation H be a restriction of T. If ran (H − λ) = H for some λ ∈ C, then

$$T = H \stackrel{\frown}{+} \hat{\mathfrak{N}}\_{\lambda}(T) \quad \text{and} \quad H \cap \hat{\mathfrak{N}}\_{\lambda}(T) = \hat{\mathfrak{N}}\_{\lambda}(H). \tag{1.7.1}$$

Assume, in addition, that H is closed and λ ∈ ρ(H). Then the decomposition in (1.7.1) holds, the sum is direct, and

T is closed if and only if Nλ(T) is closed.

Proof. Since N̂λ(T) ⊂ T, one has the inclusion H +̂ N̂λ(T) ⊂ T. To see the opposite inclusion, let {f, f′} ∈ T. Since ran (H − λ) = H, there exists an element {h, h′} ∈ H ⊂ T such that f′ − λf = h′ − λh. It follows that

$$\{f, f'\} - \{h, h'\} = \{f - h, f' - h'\} = \{f - h, \lambda(f - h)\} \in \widehat{\mathfrak{N}}\_{\lambda}(T),$$

which shows that {f, f′} ∈ H +̂ N̂λ(T) and thus T ⊂ H +̂ N̂λ(T). The statement H ∩ N̂λ(T) = N̂λ(H) is immediate.

Now assume that the relation H is closed and λ ∈ ρ(H). Then the conditions ran (H − λ) = H and ker (H − λ) = {0} are satisfied and hence the decomposition in (1.7.1) holds and the sum is direct. If T is closed, then clearly Nλ(T) is closed. To prove the converse implication, consider the linear mapping B : H × H → H defined by B{f, f′} = f′ − λf. Clearly, B is a bounded operator with ran B = H and

$$\ker B = \left\{ \{ f, \lambda f \} : f \in \mathfrak{H} \right\}.$$

Consider the relation H as a closed subspace of H × H. Then

$$BH = \text{ran}\,(H - \lambda) = \mathfrak{H} \quad \text{and} \quad H \cap \ker B = \widehat{\mathfrak{N}}\_{\lambda}(H) = \{0\}$$

since λ ∈ ρ(H). Hence, BH is closed, which implies that the sum H + ker B is closed by Lemma C.4. Moreover, the sum H + ker B is direct. By assumption, N̂λ(T) is a closed subspace of ker B, which implies that also H +̂ N̂λ(T) = T is closed; cf. Corollary C.7. □

The next preliminary lemma contains some useful observations.

**Lemma 1.7.2.** Let H and K be closed relations in H. Then, for all λ ∈ ρ(H)∩ρ(K),

$$\text{ran}\left(\left(H \cap K\right) - \lambda\right) = \text{ker}\left(\left(K - \lambda\right)^{-1} - \left(H - \lambda\right)^{-1}\right)$$

and

$$\ker\left(\left(H \stackrel{\frown}{+} K\right)-\lambda\right) = \text{ran}\left(\left(K-\lambda\right)^{-1}-\left(H-\lambda\right)^{-1}\right).$$

Proof. In order to prove the first equality, let λ ∈ ρ(H) ∩ ρ(K) and assume that g ∈ ran ((H ∩ K) − λ). Then g = h′ − λh for some {h, h′} ∈ H ∩ K and it follows that

$$(H - \lambda)^{-1}(h' - \lambda h) = h \quad \text{and} \quad (K - \lambda)^{-1}(h' - \lambda h) = h.$$

Hence, ((K − λ)⁻¹ − (H − λ)⁻¹)g = 0 and this shows the inclusion

$$\text{ran}\left(\left(H \cap K\right) - \lambda\right) \subset \ker\left(\left(K - \lambda\right)^{-1} - \left(H - \lambda\right)^{-1}\right).$$

To prove the opposite inclusion, let g ∈ ker ((K − λ)⁻¹ − (H − λ)⁻¹). Then with k = (H − λ)⁻¹g = (K − λ)⁻¹g it follows that {k, g + λk} ∈ H ∩ K. Consequently, g ∈ ran ((H ∩ K) − λ) and therefore

$$\ker\left( (K-\lambda)^{-1} - (H-\lambda)^{-1} \right) \subset \text{ran}\left( (H \cap K) - \lambda \right).$$

As to the second equality, let λ ∈ ρ(H) ∩ ρ(K) and {f, f′} ∈ H +̂ K. Then according to Lemma 1.2.4 one has the representation

$$\begin{aligned} \{f, f'\} &= \left\{ (K - \lambda)^{-1} h, \, h + \lambda (K - \lambda)^{-1} h \right\} \\ &\quad + \left\{ (H - \lambda)^{-1} k, \, k + \lambda (H - \lambda)^{-1} k \right\} \end{aligned}$$

for some h, k ∈ H. Clearly, {f, λf} ∈ H +̂ K if and only if h + k = 0 or, equivalently, f = (K − λ)⁻¹h − (H − λ)⁻¹h for some h ∈ H. This proves the second equality. □

Let H and K be closed relations in H. Then the intersection H ∩ K is closed, but the componentwise sum H +̂ K is in general not closed; cf. Proposition 1.3.12 and (1.3.5). An application of Theorem 1.7.1 and Lemma 1.7.2 gives the following characterization.

**Theorem 1.7.3.** Let H and K be closed relations in H such that ρ(H) ∩ ρ(K) ≠ ∅. Then the sum H +̂ K is closed if and only if

$$\text{ran}\left( (K - \lambda)^{-1} - (H - \lambda)^{-1} \right)$$

is closed for some, and hence for all λ ∈ ρ(H) ∩ ρ(K).

Proof. Let λ ∈ ρ(H) ∩ ρ(K). Then by Lemma 1.7.2

$$\text{ran}\left( (K - \lambda)^{-1} - (H - \lambda)^{-1} \right)$$

is closed if and only if ker ((H +̂ K) − λ) is closed. Now note that the relation H is closed, that λ ∈ ρ(H), and that H is a restriction of H +̂ K, so that by Theorem 1.7.1

$$H \stackrel{\frown}{+} K = H \stackrel{\frown}{+} \widehat{\mathfrak{N}}\_{\lambda} \left(H \stackrel{\frown}{+} K\right).$$

Moreover, it follows from Theorem 1.7.1 that ker ((H +̂ K) − λ) is closed if and only if H +̂ K is closed. □

Next follow some consequences of Theorem 1.7.1 and Theorem 1.7.3 in the context of closed symmetric relations. They are stated in terms of intermediate extensions.

**Definition 1.7.4.** Let S be a closed symmetric relation in H. A relation H is said to be an intermediate extension of S if S ⊂ H ⊂ S∗.

For instance, H defined in (1.6.5) is an intermediate extension of S. In general, an extension H of S need not be a restriction of S∗. However, if H is symmetric, then H ⊂ H∗, and it follows from S ⊂ H and H<sup>∗</sup> ⊂ S<sup>∗</sup> that S ⊂ H ⊂ S∗. Hence, symmetric and self-adjoint extensions of S are intermediate.

For intermediate extensions with nonempty resolvent set one obtains the following decomposition from Theorem 1.7.1.

**Corollary 1.7.5.** Let S be a closed symmetric relation in H. If H is a closed intermediate extension of S such that ρ(H) ≠ ∅, then

$$S^\* = H \stackrel{\frown}{+} \widehat{\mathfrak{N}}\_{\lambda}(S^\*)\tag{1.7.2}$$

for all λ ∈ ρ(H), and the sum in the decomposition (1.7.2) is direct. Furthermore, if H ∈ **B**(H), then

$$S^\* = H \stackrel{\frown}{+} \widehat{\mathfrak{N}}\_{\infty}(S^\*),\tag{1.7.3}$$

and the sum in the decomposition (1.7.3) is direct.

Proof. The direct sum decomposition (1.7.2) follows from Theorem 1.7.1. In order to prove (1.7.3), note that the inclusion (⊃) is clear. For the inclusion (⊂) take {f, f′} ∈ S∗. Then {f, Hf} ∈ H ⊂ S∗ and hence {0, f′ − Hf} ∈ S∗. Thus, {f, f′} = {f, Hf} + {0, f′ − Hf} ∈ H +̂ N̂∞(S∗). □

Next the notions of disjointness and transversality of two intermediate extensions are defined.

**Definition 1.7.6.** Let S be a closed symmetric relation in H. If H and K are closed intermediate extensions of S, then they are called disjoint if H ∩ K = S, and they are called transversal if they are disjoint and H +̂ K = S∗.

Let H and K be closed intermediate extensions of S. By Proposition 1.3.12, (H ∩ K)∗ = clos (H∗ +̂ K∗), and hence H and K are disjoint if and only if S∗ = clos (H∗ +̂ K∗). In the next lemma self-adjoint intermediate extensions are considered.
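The notions in Definition 1.7.6 can be illustrated in the simplest possible setting; the following example is added here and does not occur in the text. Let H = C and let S = {{0, 0}} be the trivial symmetric relation, so that S∗ = C × C. For s, t ∈ R with s ≠ t the self-adjoint extensions H = sI and K = tI of S satisfy

$$H \cap K = \{\{0, 0\}\} = S \quad \text{and} \quad H \stackrel{\frown}{+} K = \left\{ \{f + g, sf + tg\} : f, g \in \mathbb{C} \right\} = \mathbb{C} \times \mathbb{C} = S^\*,$$

so that H and K are disjoint and transversal.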

**Lemma 1.7.7.** Let S be a closed symmetric relation in H and let H and K be self-adjoint extensions of S. Then the following statements hold:

(i) H and K are disjoint if and only if S∗ = clos (H +̂ K);

(ii) H and K are transversal if and only if S∗ = H +̂ K.

Consequently, if H and K are disjoint, then they are transversal if and only if H +̂ K is closed.

Proof. (i) follows from the discussion before the lemma and the assumption that H and K are self-adjoint.

(ii) The implication (⇒) is clear and hence only the implication (⇐) has to be checked. But if S∗ = H +̂ K, then Proposition 1.3.12 implies

$$S = \left( H \stackrel{\frown}{+} K \right)^\* = H^\* \cap K^\* = H \cap K,$$

and hence H and K are disjoint. Together with S∗ = H +̂ K this shows that H and K are transversal. □

The next theorem provides useful criteria for disjointness and transversality of self-adjoint extensions.

**Theorem 1.7.8.** Let S be a closed symmetric relation in H and let H and K be self-adjoint extensions of S. Then the following statements hold:

(i) H and K are disjoint if and only if

$$\text{ran}\left(S - \lambda\right) = \text{ker}\left(\left(K - \lambda\right)^{-1} - \left(H - \lambda\right)^{-1}\right) \tag{1.7.4}$$

for some, and hence for all λ ∈ ρ(H) ∩ ρ(K);

(ii) H and K are transversal if and only if

$$\ker\left(S^\*-\lambda\right) = \text{ran}\left(\left(K-\lambda\right)^{-1} - \left(H-\lambda\right)^{-1}\right) \tag{1.7.5}$$

for some, and hence for all λ ∈ ρ(H) ∩ ρ(K).

Proof. (i) If S = H ∩K, then (1.7.4) holds for all λ ∈ ρ(H)∩ρ(K) by Lemma 1.7.2. Conversely, if (1.7.4) holds for some λ ∈ ρ(H) ∩ ρ(K), then Lemma 1.7.2 shows that

$$\text{ran}\left(S - \lambda\right) = \text{ran}\left(\left(H \cap K\right) - \lambda\right).$$

Since λ ∈ ρ(H) and both H ∩ K and S are restrictions of H, one has

$$\ker\left(\left(H \cap K\right) - \lambda\right) = \{0\} = \ker\left(S - \lambda\right).$$

Clearly, S − λ ⊂ (H ∩ K) − λ and now the equality S − λ = (H ∩ K) − λ follows from Corollary 1.1.3. This implies S = H ∩ K.

(ii) If $S^* = H \,\widehat{+}\, K$, then (1.7.5) holds for all $\lambda \in \rho(H) \cap \rho(K)$ by Lemma 1.7.2. Conversely, assume that (1.7.5) holds for some $\lambda \in \rho(H) \cap \rho(K)$ and let

$$T = H \,\widehat{+}\, K.$$

Since $H \subset S^*$ and $H \subset T$, it follows from Theorem 1.7.1 that

$$S^* = H \,\widehat{+}\, \widehat{\mathfrak{N}}_{\lambda}(S^*) \quad \text{and} \quad T = H \,\widehat{+}\, \widehat{\mathfrak{N}}_{\lambda}(T).$$

By Lemma 1.7.2, the assumption (1.7.5) means that $\ker(S^* - \lambda) = \ker(T - \lambda)$. Therefore, $\widehat{\mathfrak{N}}_{\lambda}(S^*) = \widehat{\mathfrak{N}}_{\lambda}(T)$ and $S^* = T = H \,\widehat{+}\, K$. Now Lemma 1.7.7 (ii) implies that $H$ and $K$ are transversal. $\square$

In the next result a closed intermediate extension $H$ of $S$ with $\rho(H) \neq \emptyset$ is decomposed into the direct sum of $S$ and another closed subspace in $\mathfrak H^2$. Recall that in Proposition 1.6.8 it was shown that there always exist closed intermediate extensions with this property.

**Proposition 1.7.9.** Let $S$ be a closed symmetric relation in $\mathfrak H$ and let $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Let $H$ be a closed intermediate extension of $S$ with $\lambda \in \rho(H)$. Then

$$H = S \,\widehat{+}\, \left\{ \{ (H - \lambda)^{-1} f_{\overline{\lambda}},\, (I + \lambda(H - \lambda)^{-1}) f_{\overline{\lambda}} \} \,:\, f_{\overline{\lambda}} \in \mathfrak{N}_{\overline{\lambda}}(S^*) \right\} \tag{1.7.6}$$

and the sum is direct.

Proof. In order to show (1.7.6) observe that by Lemma 1.2.4 and $S \subset H$ the right-hand side is contained in $H$. To see the opposite inclusion, let $\{h, h'\} \in H$. Since $\lambda \in \mathbb{C} \setminus \mathbb{R}$, one has

$$\mathfrak{H} = \text{ran}\left( S - \lambda \right) \oplus \ker \left( S^\* - \overline{\lambda} \right) = \text{ran}\left( S - \lambda \right) \oplus \mathfrak{N}\_{\overline{\lambda}}(S^\*).$$

Due to this decomposition, there exist $\{f, f'\} \in S$ and $f_{\overline{\lambda}} \in \mathfrak{N}_{\overline{\lambda}}(S^*)$ such that

$$h' - \lambda h = f' - \lambda f + f\_{\overline{\lambda}}.$$

Hence, it follows from $\{h - f, h' - f'\} \in H$ that $\{h - f, f_{\overline{\lambda}}\} \in H - \lambda$,

$$h - f = (H - \lambda)^{-1} f\_{\overline{\lambda}}, \quad \text{and} \quad h' - f' = f\_{\overline{\lambda}} + \lambda (H - \lambda)^{-1} f\_{\overline{\lambda}},$$

and therefore

$$\{h, h'\} - \{f, f'\} = \{h - f,\, h' - f'\} = \left\{ (H - \lambda)^{-1} f_{\overline{\lambda}},\, (I + \lambda(H - \lambda)^{-1}) f_{\overline{\lambda}} \right\}.$$

Thus, $\{h, h'\}$ belongs to the right-hand side of (1.7.6).

In order to show that the sum in (1.7.6) is direct, assume that

$$\left\{ (H - \lambda)^{-1} f_{\overline{\lambda}},\, (I + \lambda(H - \lambda)^{-1}) f_{\overline{\lambda}} \right\} \in S$$

for some $f_{\overline{\lambda}} \in \mathfrak{N}_{\overline{\lambda}}(S^*)$. Then, since $\{f_{\overline{\lambda}}, \overline{\lambda} f_{\overline{\lambda}}\} \in S^*$, it follows that

$$\left( (I + \lambda (H - \lambda)^{-1}) f\_{\overline{\lambda}}, f\_{\overline{\lambda}} \right) = \left( (H - \lambda)^{-1} f\_{\overline{\lambda}}, \overline{\lambda} f\_{\overline{\lambda}} \right),$$

which leads to $f_{\overline{\lambda}} = 0$. $\square$
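In finite dimensions the decomposition (1.7.6) can be checked numerically. The following sketch uses a hypothetical model (all names illustrative): $S$ is the restriction of a random real symmetric matrix $A$ to a two-dimensional domain, $H = A$ is a self-adjoint intermediate extension with $i \in \rho(H)$, and $\lambda = i$.

```python
import numpy as np

# Hypothetical model: S = { {h, Ah} : h in D } for a real symmetric 4x4
# matrix A and a 2-dimensional domain D; take H = A and lam = i.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
D = np.eye(n, 2)                                   # basis of the domain of S
lam = 1j

# N_{conj(lam)}(S*) = ker(S* + i) = ran((A - i) D)^perp
B = (A - lam * np.eye(n)) @ D
_, s, Vh = np.linalg.svd(B.conj().T)
N_bar = Vh[len(s):].conj().T                       # basis of N_{-i}(S*)

R = np.linalg.inv(A - lam * np.eye(n))             # (H - lam)^{-1}

# middle summand of (1.7.6): { (H-lam)^{-1} f, (I + lam (H-lam)^{-1}) f }
mid = np.vstack([R @ N_bar, N_bar + lam * (R @ N_bar)])

# every column lies in the graph of H = A ...
assert np.allclose(A @ mid[:n], mid[n:])
# ... and together with S it spans H, the sum being direct
S_graph = np.vstack([D, A @ D])
assert np.linalg.matrix_rank(np.hstack([S_graph, mid])) == n
```

The final rank assertion combines both claims of Proposition 1.7.9: the four columns are linearly independent (the sum is direct) and they span the $n$-dimensional graph of $H$.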

The next statement is a consequence of Corollary 1.7.5 and Proposition 1.7.9.

**Corollary 1.7.10.** Let $S$ be a closed symmetric relation in $\mathfrak H$ and let $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Let $H$ be a closed intermediate extension of $S$ with $\lambda \in \rho(H)$. Then

$$S^* = S \,\widehat{+}\, \left\{ \{ (H - \lambda)^{-1} f_{\overline{\lambda}},\, (I + \lambda(H - \lambda)^{-1}) f_{\overline{\lambda}} \} \,:\, f_{\overline{\lambda}} \in \mathfrak{N}_{\overline{\lambda}}(S^*) \right\} \,\widehat{+}\, \widehat{\mathfrak{N}}_{\lambda}(S^*)$$

and all sums are direct.


The following result is von Neumann's first formula, stated in the context of a closed symmetric relation S. This decomposition of S<sup>∗</sup> into the direct sum of S and two defect subspaces corresponding to two points in the upper and lower half-plane can be viewed as a consequence of Corollary 1.7.5 and Proposition 1.6.8.

**Theorem 1.7.11.** Let $S$ be a closed symmetric relation in $\mathfrak H$ and let $\lambda, \mu \in \mathbb{C} \setminus \mathbb{R}$ be in the same half-plane. Then

$$S^* = S \,\widehat{+}\, \widehat{\mathfrak{N}}_{\lambda}(S^*) \,\widehat{+}\, \widehat{\mathfrak{N}}_{\overline{\mu}}(S^*), \quad direct\ sums. \tag{1.7.7}$$

The sums are orthogonal in $\mathfrak H^2$ when $\lambda = \mu = \pm i$.

Proof. Assume that $\lambda, \mu \in \mathbb{C}^+$. By Proposition 1.6.8, the relation

$$H = S \,\widehat{+}\, \widehat{\mathfrak{N}}_{\overline{\mu}}(S^*)$$

is a maximal accumulative intermediate extension of $S$, and the sum is direct. Since $\mathbb{C}^+ \subset \rho(H)$ by Theorem 1.6.4 and $\lambda \in \mathbb{C}^+$, it follows from Corollary 1.7.5 that $S^* = H \,\widehat{+}\, \widehat{\mathfrak{N}}_{\lambda}(S^*)$ and the sum is direct. Hence, (1.7.7) follows. The case where $\lambda, \mu \in \mathbb{C}^-$ is completely similar.

It is a simple calculation to show that $\widehat{\mathfrak{N}}_{i}(S^*)$ and $\widehat{\mathfrak{N}}_{-i}(S^*)$ are orthogonal in $\mathfrak H^2$. The orthogonality of $S$ and $\widehat{\mathfrak{N}}_{\pm i}(S^*)$ in $\mathfrak H^2$ follows from

$$(f, f\_{\pm i}) + (f', \pm i f\_{\pm i}) = (f, f\_{\pm i}) \mp i(f', f\_{\pm i}) = (f, f\_{\pm i}) \mp i(f, \pm i f\_{\pm i}) = 0,$$


where it was used that $\{f, f'\} \in S$ and $\widehat{\mathfrak{N}}_{\pm i}(S^*) \subset S^*$. $\square$
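The orthogonality statements and the dimension count in von Neumann's first formula can be illustrated numerically in a hypothetical finite-dimensional model (all names illustrative): $S$ is the restriction of a random real symmetric matrix $A$ to a two-dimensional domain, so that $\mathfrak N_{\lambda}(S^*) = ((A - \overline{\lambda})D)^{\perp}$.

```python
import numpy as np

# Hypothetical model: S = { {h, Ah} : h in D } in C^4.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
D = np.eye(n, 2)

def defect(lam):
    """Orthonormal basis of N_lam(S*) = ((A - conj(lam)) D)^perp."""
    B = (A - np.conj(lam) * np.eye(n)) @ D
    _, s, Vh = np.linalg.svd(B.conj().T)
    return Vh[len(s):].conj().T

Ni, Nmi = defect(1j), defect(-1j)              # N_i(S*) and N_{-i}(S*)
lift = lambda F, lam: np.vstack([F, lam * F])  # f -> {f, lam f}

# orthogonality of the lifted defect spaces in H^2
assert np.allclose(lift(Ni, 1j).conj().T @ lift(Nmi, -1j), 0)
# orthogonality of S and the lifted defect spaces in H^2
S_graph = np.vstack([D, A @ D])
assert np.allclose(S_graph.conj().T @ lift(Ni, 1j), 0)
assert np.allclose(S_graph.conj().T @ lift(Nmi, -1j), 0)
# dimension count in (1.7.7): dim S* = dim S + dim N_i + dim N_{-i}
assert 2 * n - D.shape[1] == D.shape[1] + Ni.shape[1] + Nmi.shape[1]
```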

The next result is von Neumann's second formula, stated in the context of a closed symmetric relation $S$. It describes all symmetric extensions of $S$ in terms of isometric operators between the defect subspaces $\mathfrak{N}_{\overline{\mu}}(S^*)$ and $\mathfrak{N}_{\mu}(S^*)$ appearing in Theorem 1.7.11. The following notation will be useful. Let $\lambda \in \mathbb{C} \setminus \mathbb{R}$ and let $\mathfrak{M}_{\lambda}$ be a closed linear subspace of $\mathfrak{N}_{\lambda}(S^*)$. Then $\widehat{\mathfrak{M}}_{\lambda}$ denotes the closed linear subspace of $\widehat{\mathfrak{N}}_{\lambda}(S^*)$ defined by

$$\widehat{\mathfrak{M}}_{\lambda} = \left\{ \{f_{\lambda}, \lambda f_{\lambda}\} \in S^* \,:\, f_{\lambda} \in \mathfrak{M}_{\lambda} \right\}.$$

Now let $\mu \in \mathbb{C} \setminus \mathbb{R}$ and let $W$ be a bounded linear mapping from a closed linear subspace $\mathfrak{M}_{\overline{\mu}}$ of $\mathfrak{N}_{\overline{\mu}}(S^*)$ to $\mathfrak{N}_{\mu}(S^*)$. Then $W$ induces a linear mapping $\widehat{W}$ from $\widehat{\mathfrak{M}}_{\overline{\mu}}$ to $\widehat{\mathfrak{N}}_{\mu}(S^*)$ by

$$\widehat{W}\{f_{\overline{\mu}}, \overline{\mu}f_{\overline{\mu}}\} = \{Wf_{\overline{\mu}}, \mu Wf_{\overline{\mu}}\}.$$

Clearly, $\widehat{W}$ is bounded if and only if $W$ is bounded. In fact, every bounded linear mapping from $\widehat{\mathfrak{M}}_{\overline{\mu}}$ to $\widehat{\mathfrak{N}}_{\mu}(S^*)$ is of this form. To see this, it suffices to observe that if $\widehat{W}\{f_{\overline{\mu}}, \overline{\mu}f_{\overline{\mu}}\} = \{g_{\mu}, \mu g_{\mu}\}$, then the mapping $f_{\overline{\mu}} \in \mathfrak{M}_{\overline{\mu}} \mapsto g_{\mu} \in \mathfrak{N}_{\mu}(S^*)$ is linear. Moreover, this mapping is also bounded, since

$$\sqrt{1+|\mu|^2} \|g\_{\mu}\| \le \|\widehat{W}\| \sqrt{1+|\overline{\mu}|^2} \|f\_{\overline{\mu}}\|,$$

thanks to the boundedness of $\widehat{W}$ and the structure of the standard inner product.

**Theorem 1.7.12.** Let $S$ be a closed symmetric relation in $\mathfrak H$ and let $\mu \in \mathbb{C} \setminus \mathbb{R}$. Then $H$ is a closed symmetric extension of $S$ if and only if there exists an isometric operator $W$ mapping a closed subspace $\mathfrak{M}_{\overline{\mu}} \subset \mathfrak{N}_{\overline{\mu}}(S^*)$ onto a closed subspace $\mathfrak{M}_{\mu} \subset \mathfrak{N}_{\mu}(S^*)$, such that

$$H = S \,\widehat{+}\, (I - \widehat{W}) \widehat{\mathfrak{M}}_{\overline{\mu}}.\tag{1.7.8}$$

The closed symmetric extension $H$ is maximal if and only if $\mathfrak{M}_{\overline{\mu}} = \mathfrak{N}_{\overline{\mu}}(S^*)$ or $\mathfrak{M}_{\mu} = \mathfrak{N}_{\mu}(S^*)$ holds. Furthermore, the extension $H$ is self-adjoint if and only if $\mathfrak{M}_{\overline{\mu}} = \mathfrak{N}_{\overline{\mu}}(S^*)$ and $\mathfrak{M}_{\mu} = \mathfrak{N}_{\mu}(S^*)$ hold.

Let $\mathfrak{M}_{\overline{\mu}} \subset \mathfrak{N}_{\overline{\mu}}(S^*)$ and $\mathfrak{M}_{\mu} \subset \mathfrak{N}_{\mu}(S^*)$ be closed subspaces and observe that isometric operators $W$ from $\mathfrak{M}_{\overline{\mu}}$ onto $\mathfrak{M}_{\mu}$ exist if and only if the dimensions of the spaces $\mathfrak{M}_{\overline{\mu}}$ and $\mathfrak{M}_{\mu}$ coincide. This implies the following statement.

**Corollary 1.7.13.** Let $S$ be a closed symmetric relation in $\mathfrak H$. Then $S$ admits self-adjoint extensions $H$ in $\mathfrak H$ if and only if

$$\dim \mathfrak{N}\_{\mu}(S^\*) = \dim \mathfrak{N}\_{\overline{\mu}}(S^\*)$$

for some, and hence for all, $\mu \in \mathbb{C} \setminus \mathbb{R}$.

Proof of Theorem 1.7.12. (⇒) Let $H$ be a closed symmetric extension of $S$, let $\mu \in \mathbb{C} \setminus \mathbb{R}$, and consider the Cayley transforms

$$V := \mathcal{C}_{\mu}[S] = \left\{ \{f' - \mu f,\, f' - \overline{\mu}f\} \,:\, \{f, f'\} \in S \right\},$$

and

$$U := \mathcal{C}_{\mu}[H] = \left\{ \{h' - \mu h,\, h' - \overline{\mu}h\} \,:\, \{h, h'\} \in H \right\}$$

of $S$ and $H$, respectively. According to Proposition 1.4.8, $V$ is a closed isometric operator from the closed subspace $\operatorname{ran}(S - \mu)$ onto the closed subspace $\operatorname{ran}(S - \overline{\mu})$, and $U$ is an isometric extension of $V$ from the closed subspace $\operatorname{ran}(H - \mu)$ onto the closed subspace $\operatorname{ran}(H - \overline{\mu})$. It follows that there exist closed subspaces

$$
\mathfrak{M}\_{\overline{\mu}} \subset \mathfrak{N}\_{\overline{\mu}}(S^\*) = \text{ran}\,(S - \mu)^\perp \quad \text{and} \quad \mathfrak{M}\_{\mu} \subset \mathfrak{N}\_{\mu}(S^\*) = \text{ran}\,(S - \overline{\mu})^\perp,
$$

such that

$$
\operatorname{ran}\left(H - \mu\right) = \operatorname{ran}\left(S - \mu\right) \oplus \mathfrak{M}_{\overline{\mu}} \quad \text{and} \quad \operatorname{ran}\left(H - \overline{\mu}\right) = \operatorname{ran}\left(S - \overline{\mu}\right) \oplus \mathfrak{M}_{\mu},
$$

Let $W$ be the restriction of $U$ to $\mathfrak{M}_{\overline{\mu}}$. Then $W$ maps $\mathfrak{M}_{\overline{\mu}}$ isometrically onto $\mathfrak{M}_{\mu}$ and

$$U = \begin{pmatrix} V & 0 \\ 0 & W \end{pmatrix} : \begin{pmatrix} \operatorname{ran}\left(S - \mu\right) \\ \mathfrak{M}_{\overline{\mu}} \end{pmatrix} \to \begin{pmatrix} \operatorname{ran}\left(S - \overline{\mu}\right) \\ \mathfrak{M}_{\mu} \end{pmatrix}. \tag{1.7.9}$$

Taking the inverse Cayley transform leads to

$$\begin{aligned} H &= \mathcal{F}_{\mu}[U] = \mathcal{F}_{\mu}[V] \,\widehat{+}\, \mathcal{F}_{\mu}[W] \\ &= S \,\widehat{+}\, \left\{ \{ (I - W) f_{\overline{\mu}},\, (\overline{\mu} - \mu W) f_{\overline{\mu}} \} \,:\, f_{\overline{\mu}} \in \mathfrak{M}_{\overline{\mu}} \right\} \\ &= S \,\widehat{+}\, \left\{ \{ f_{\overline{\mu}}, \overline{\mu} f_{\overline{\mu}} \} - \{ W f_{\overline{\mu}}, \mu W f_{\overline{\mu}} \} \,:\, f_{\overline{\mu}} \in \mathfrak{M}_{\overline{\mu}} \right\}, \end{aligned}$$

which implies (1.7.8). In the case where the closed symmetric extension $H$ is maximal, $\operatorname{ran}(H - \mu) = \mathfrak H$ or $\operatorname{ran}(H - \overline{\mu}) = \mathfrak H$ by (1.4.7), and hence one has $\mathfrak{M}_{\overline{\mu}} = \mathfrak{N}_{\overline{\mu}}(S^*)$ or $\mathfrak{M}_{\mu} = \mathfrak{N}_{\mu}(S^*)$, respectively. If $H$ is self-adjoint, then it is clear that $\operatorname{ran}(H - \mu) = \mathfrak H = \operatorname{ran}(H - \overline{\mu})$ by Theorem 1.5.5, and therefore $\mathfrak{M}_{\overline{\mu}} = \mathfrak{N}_{\overline{\mu}}(S^*)$ and $\mathfrak{M}_{\mu} = \mathfrak{N}_{\mu}(S^*)$.

(⇐) Let $\mu \in \mathbb{C} \setminus \mathbb{R}$, assume that $W$ is an isometric operator from a closed subspace $\mathfrak{M}_{\overline{\mu}} \subset \mathfrak{N}_{\overline{\mu}}(S^*)$ onto a closed subspace $\mathfrak{M}_{\mu} \subset \mathfrak{N}_{\mu}(S^*)$, and consider the relation $H = S \,\widehat{+}\, (I - \widehat{W})\widehat{\mathfrak{M}}_{\overline{\mu}}$. Let $V$ be the Cayley transform of $S$, define the operator $U$ as in (1.7.9), and note that

$$H = S \,\widehat{+}\, (I - \widehat{W}) \widehat{\mathfrak{M}}_{\overline{\mu}} = \mathcal{F}_{\mu}[V] \,\widehat{+}\, \mathcal{F}_{\mu}[W] = \mathcal{F}_{\mu}[U]$$

holds. Since $U$ is isometric and closed, it follows from Proposition 1.4.8 that $H$ is a closed symmetric relation. If $\mathfrak{M}_{\overline{\mu}} = \mathfrak{N}_{\overline{\mu}}(S^*)$ or $\mathfrak{M}_{\mu} = \mathfrak{N}_{\mu}(S^*)$, then one has $\operatorname{dom} U = \mathfrak H$ or $\operatorname{ran} U = \mathfrak H$, respectively, and therefore one sees that $\operatorname{ran}(H - \mu) = \mathfrak H$ or $\operatorname{ran}(H - \overline{\mu}) = \mathfrak H$, respectively, so that $H$ is a maximal symmetric relation. Finally, if $\mathfrak{M}_{\overline{\mu}} = \mathfrak{N}_{\overline{\mu}}(S^*)$ and $\mathfrak{M}_{\mu} = \mathfrak{N}_{\mu}(S^*)$, then $U$ is unitary and hence $H$ is self-adjoint; cf. Proposition 1.4.8. $\square$
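The construction (1.7.8) can be traced numerically in a hypothetical finite-dimensional model (all names illustrative): with $\mu = i$, take $S$ to be the restriction of a random real symmetric matrix to a two-dimensional domain and let $W$ map a chosen orthonormal basis of $\mathfrak N_{-i}(S^*)$ onto one of $\mathfrak N_{i}(S^*)$; the resulting $H$ should then be a symmetric subspace of $\mathbb C^8$ of dimension $4$, hence self-adjoint.

```python
import numpy as np

# Hypothetical model: S = { {h, Ah} : h in D } in C^4, mu = i.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
D = np.eye(n, 2)
mu = 1j

def defect(lam):
    """Orthonormal basis of N_lam(S*) = ((A - conj(lam)) D)^perp."""
    B = (A - np.conj(lam) * np.eye(n)) @ D
    _, s, Vh = np.linalg.svd(B.conj().T)
    return Vh[len(s):].conj().T

N_mu, N_bar = defect(mu), defect(np.conj(mu))   # N_i(S*), N_{-i}(S*)
W_N_bar = N_mu                    # W: basis of N_{-i} -> basis of N_i (isometric)

# H = S + { {f, conj(mu) f} - {W f, mu W f} : f in N_{-i}(S*) }, cf. (1.7.8)
ext = np.vstack([N_bar - W_N_bar, np.conj(mu) * N_bar - mu * W_N_bar])
H = np.hstack([np.vstack([D, A @ D]), ext])

P, Q = H[:n], H[n:]               # first and second components of H
assert np.linalg.matrix_rank(H) == n                 # dim H = n
assert np.allclose(P.conj().T @ Q, Q.conj().T @ P)   # H is symmetric
```

A symmetric subspace of $\mathbb C^{2n}$ of dimension $n$ is automatically self-adjoint, since its adjoint has dimension $2n - n = n$ as well.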

The second von Neumann formula in Theorem 1.7.12 has a natural extension, which describes all accumulative (dissipative) extensions of S in terms of contractive operators between the defect spaces.

**Theorem 1.7.14.** Let $S$ be a closed symmetric relation in $\mathfrak H$. Then $H$ is a closed accumulative (closed dissipative) extension of $S$ if and only if for some $\mu \in \mathbb{C}^+$ ($\mu \in \mathbb{C}^-$) there exists a contraction $W$ mapping a closed subspace $\mathfrak{M}_{\overline{\mu}} \subset \mathfrak{N}_{\overline{\mu}}(S^*)$ to $\mathfrak{N}_{\mu}(S^*)$, such that

$$H = S \,\widehat{+}\, (I - \widehat{W}) \widehat{\mathfrak{M}}_{\overline{\mu}}.\tag{1.7.10}$$

The closed accumulative (closed dissipative) extension $H$ is maximal if and only if $\mathfrak{M}_{\overline{\mu}} = \mathfrak{N}_{\overline{\mu}}(S^*)$ holds for $\mu \in \mathbb{C}^+$ ($\mu \in \mathbb{C}^-$).

Proof. (⇒) Let $H$ be a closed accumulative extension of $S$ and let $\mu \in \mathbb{C}^+$. Define the Cayley transform $V = \mathcal{C}_{\mu}[S]$ of $S$, so that $V$ is a closed isometry from the closed subspace $\operatorname{ran}(S - \mu)$ onto the closed subspace $\operatorname{ran}(S - \overline{\mu})$. By Proposition 1.6.6 and (1.1.18), the Cayley transform $U = \mathcal{C}_{\mu}[H]$ of $H$ is a closed contractive extension of $V$ from the closed subspace $\operatorname{ran}(H - \mu)$ onto the subspace $\operatorname{ran}(H - \overline{\mu})$. Then there exists a closed subspace $\mathfrak{M}_{\overline{\mu}} \subset \mathfrak{N}_{\overline{\mu}}(S^*)$ such that

$$\operatorname{ran}\,(H-\mu) = \operatorname{ran}\,(S-\mu) \oplus \mathfrak{M}_{\overline{\mu}}.$$

Let $W$ be the restriction of $U$ to $\mathfrak{M}_{\overline{\mu}}$. It will be shown that $U$ is of the form

$$U = \begin{pmatrix} V & 0 \\ 0 & W \end{pmatrix} : \begin{pmatrix} \operatorname{ran}\left(S - \mu\right) \\ \mathfrak{M}_{\overline{\mu}} \end{pmatrix} \to \begin{pmatrix} \operatorname{ran}\left(S - \overline{\mu}\right) \\ \mathfrak{N}_{\mu}(S^*) \end{pmatrix}. \tag{1.7.11}$$

For this it suffices to verify that the contractive operator $W$ is a mapping from $\mathfrak{M}_{\overline{\mu}}$ to $\mathfrak{N}_{\mu}(S^*) = \ker(S^* - \mu)$. To see this, note that the restriction $V$ of $U$ is isometric. Hence, by (1.1.10),

$$(V\varphi, U\psi) = (\varphi, \psi), \quad \varphi \in \text{dom}\, V \subset \text{dom}\, U, \quad \psi \in \text{dom}\, U. \tag{1.7.12}$$

Observe that if $\psi \in \mathfrak{M}_{\overline{\mu}}$, then $(\varphi, \psi) = 0$ for all $\varphi \in \operatorname{dom} V$. Thus, (1.7.12) implies that $W\psi = U\psi \in (\operatorname{ran} V)^{\perp} = \mathfrak{N}_{\mu}(S^*)$. This yields (1.7.11).

Taking the inverse Cayley transform of U in (1.7.11) leads (in the same way as in the proof of Theorem 1.7.12) to

$$\begin{aligned} H &= \mathcal{F}_{\mu}[U] = \mathcal{F}_{\mu}[V] \,\widehat{+}\, \mathcal{F}_{\mu}[W] \\ &= S \,\widehat{+}\, \left\{ \{f_{\overline{\mu}}, \overline{\mu}f_{\overline{\mu}}\} - \{Wf_{\overline{\mu}}, \mu Wf_{\overline{\mu}}\} \,:\, f_{\overline{\mu}} \in \mathfrak{M}_{\overline{\mu}} \right\}, \end{aligned}$$

which implies (1.7.10). If $H$ is maximal accumulative, then $\operatorname{ran}(H - \mu) = \mathfrak H$ and hence $\mathfrak{M}_{\overline{\mu}} = \mathfrak{N}_{\overline{\mu}}(S^*)$.

(⇐) Let $\mu \in \mathbb{C}^+$ and assume that $W$ is a contractive operator from a closed subspace $\mathfrak{M}_{\overline{\mu}} \subset \mathfrak{N}_{\overline{\mu}}(S^*)$ to $\mathfrak{N}_{\mu}(S^*)$, and consider the relation $H = S \,\widehat{+}\, (I - \widehat{W})\widehat{\mathfrak{M}}_{\overline{\mu}}$. Let $V$ be the Cayley transform of $S$, define the operator $U$ as in (1.7.11), and note that

$$H = S \,\widehat{+}\, (I - \widehat{W}) \widehat{\mathfrak{M}}_{\overline{\mu}} = \mathcal{F}_{\mu}[V] \,\widehat{+}\, \mathcal{F}_{\mu}[W] = \mathcal{F}_{\mu}[U].$$

Since $U$ is a closed contractive operator, it follows from Proposition 1.6.6 and (1.1.18) that $H$ is a closed accumulative extension of $S$. If $\mathfrak{M}_{\overline{\mu}} = \mathfrak{N}_{\overline{\mu}}(S^*)$, then $\operatorname{dom} U = \mathfrak H$ and hence $\operatorname{ran}(H - \mu) = \mathfrak H$, so that $H$ is maximal accumulative. $\square$

## **1.8 Adjoint relations and indefinite inner products**

The adjoint of a relation in a Hilbert space $\mathfrak H$ has a natural interpretation in terms of a certain indefinite inner product $[\![\cdot, \cdot]\!]_{\mathfrak H^2}$ on the product space $\mathfrak H \times \mathfrak H$. It will be shown that surjective operators which are isometric with respect to such indefinite inner products and have a closed domain are automatically bounded. Furthermore, some geometric transformation properties of operator-valued Möbius transformations which are unitary with respect to indefinite inner products will be studied.

Let $H$ be a relation in $\mathfrak H$. The adjoint relation $H^*$ in Definition 1.3.1 satisfies $H^* = (JH)^{\perp} = JH^{\perp}$, where $J$ is the flip-flop operator in (1.3.1) and the orthogonal complement refers to the componentwise inner product in the product space $\mathfrak H \times \mathfrak H$; cf. (1.3.2). Define the operator $\mathcal J$ on the product space $\mathfrak H^2$ as $\mathcal J = -iJ$, where $J$ is the flip-flop operator:

$$\mathcal{J} := -i \begin{pmatrix} 0 & I\_{\mathfrak{H}} \\ -I\_{\mathfrak{H}} & 0 \end{pmatrix} = \begin{pmatrix} 0 & -iI\_{\mathfrak{H}} \\ iI\_{\mathfrak{H}} & 0 \end{pmatrix}. \tag{1.8.1}$$

Sometimes the notation $\mathcal J_{\mathfrak H}$ is used to indicate the underlying Hilbert space. Clearly, the operator $\mathcal J$ in (1.8.1) has the properties

$$
\mathcal{J} = \mathcal{J}^\* = \mathcal{J}^{-1} \in \mathbf{B}(\mathfrak{H}^2),
$$

so that $\mathcal J$ is unitary and self-adjoint. The operator $\mathcal J$ gives rise to an inner product $[\![\cdot, \cdot]\!]$ on $\mathfrak H^2$ as follows:

$$[\![\widehat{h}, \widehat{k}]\!] := \left(\mathcal{J}\widehat{h}, \widehat{k}\right)_{\mathfrak{H}^2}, \qquad \widehat{h} = \begin{pmatrix} h \\ h' \end{pmatrix}, \ \widehat{k} = \begin{pmatrix} k \\ k' \end{pmatrix} \in \mathfrak{H}^2,\tag{1.8.2}$$

where for convenience $\widehat h$ and $\widehat k$ are written in vector notation. In the following sometimes an index is used to indicate in which space the indefinite inner product is defined, e.g., $[\![\cdot, \cdot]\!]_{\mathfrak H^2}$. Explicitly, the new inner product is given by

$$[\![\widehat{h},\widehat{k}]\!] = -i\left(\left(h',k\right)-\left(h,k'\right)\right), \qquad \widehat{h} = \begin{pmatrix} h \\ h' \end{pmatrix},\ \widehat{k} = \begin{pmatrix} k \\ k' \end{pmatrix} \in \mathfrak{H}^2,\tag{1.8.3}$$

and note that

$$[\![\widehat{h}, \widehat{h}]\!] = 2\operatorname{Im}\left(h', h\right), \qquad \widehat{h} = \begin{pmatrix} h \\ h' \end{pmatrix} \in \mathfrak{H}^2. \tag{1.8.4}$$

This shows that the new inner product on $\mathfrak H^2$ is indefinite; in fact, $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ is a so-called Kreĭn space. It follows from (1.8.2) that the inner product $[\![\cdot, \cdot]\!]$ is continuous: if $\widehat h_n \to \widehat h$ and $\widehat k_m \to \widehat k$ in $\mathfrak H^2$ in the usual sense, then clearly

$$[\![\widehat{h}_n, \widehat{k}_m]\!] \to [\![\widehat{h}, \widehat{k}]\!] \quad \text{as} \quad m, n \to \infty.$$
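The basic properties of $\mathcal J$ and formula (1.8.4) are easy to verify numerically in the scalar case $\mathfrak H = \mathbb C$, where $\mathcal J$ acts on $\mathbb C^2$ (a minimal sketch; names are illustrative).

```python
import numpy as np

# J from (1.8.1) in the scalar case: J = [[0, -i], [i, 0]] on C^2
J = np.array([[0, -1j], [1j, 0]])

assert np.allclose(J, J.conj().T)        # J = J*
assert np.allclose(J @ J, np.eye(2))     # J = J^{-1}

rng = np.random.default_rng(1)
h = rng.standard_normal(2) + 1j * rng.standard_normal(2)   # h_hat = {h, h'}

ip = np.vdot(h, J @ h)                   # [[h_hat, h_hat]] = (J h_hat, h_hat)
assert abs(ip.imag) < 1e-12              # the form is real on the diagonal
assert np.isclose(ip.real, 2 * np.imag(h[1] * np.conj(h[0])))   # (1.8.4)
```

Note that `np.vdot` conjugates its first argument, which matches the pairing $(\mathcal J\widehat h, \widehat h)$ for an inner product linear in the first entry.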

For a linear subspace $H$ of $\mathfrak H^2$, the $[\![\cdot, \cdot]\!]$-orthogonal companion is given by

$$H^{[\![\perp]\!]} = \left\{ \widehat{h} \in \mathfrak{H}^2 \,:\, [\![\widehat{h}, \widehat{k}]\!] = 0 \text{ for all } \widehat{k} \in H \right\}.$$

Hence, it follows from (1.8.3) that the adjoint $H^*$ (with respect to the standard inner product) of the relation $H$ in $\mathfrak H$ coincides with the orthogonal companion $H^{[\![\perp]\!]}$ (with respect to the indefinite inner product $[\![\cdot, \cdot]\!]$) of the subspace $H$ in $\mathfrak H^2$:

$$H^* = H^{[\![\perp]\!]}.\tag{1.8.5}$$
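Identity (1.8.5) can be illustrated numerically for a matrix $A$ (a minimal sketch; names are illustrative): the graph of the adjoint matrix $A^*$ is the $[\![\cdot,\cdot]\!]$-orthogonal companion of the graph of $A$ in $\mathbb C^{2n}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# J from (1.8.1) on C^{2n}
J = np.block([[np.zeros((n, n)), -1j * np.eye(n)],
              [1j * np.eye(n), np.zeros((n, n))]])

G = np.vstack([np.eye(n), A])            # columns span graph(A) = { {h, Ah} }
Gs = np.vstack([np.eye(n), A.conj().T])  # columns span graph(A*) = { {k, A* k} }

# [[g, gs]] = (J g, gs) vanishes for g in graph(A), gs in graph(A*) ...
assert np.allclose(Gs.conj().T @ (J @ G), 0)
# ... and a dimension count (n + n = 2n) shows graph(A*) is the full companion
assert np.linalg.matrix_rank(G) + np.linalg.matrix_rank(Gs) == 2 * n
```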

The indefinite inner product $[\![\cdot, \cdot]\!]$ on $\mathfrak H^2$ provides an appropriate tool to describe certain fundamental notions and identities. A linear subspace $H$ in the space $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ is said to be

(i) nonnegative if $[\![\widehat h, \widehat h]\!] \geq 0$ for all $\widehat h \in H$;

(ii) nonpositive if $[\![\widehat h, \widehat h]\!] \leq 0$ for all $\widehat h \in H$;

(iii) neutral if $[\![\widehat h, \widehat h]\!] = 0$ for all $\widehat h \in H$ or, equivalently, $H \subset H^{[\![\perp]\!]}$;

and hypermaximal neutral if $H = H^{[\![\perp]\!]}$; the equivalence in (iii) follows from (1.8.4), (1.8.5), and Lemma 1.4.2. A linear subspace $H$ in the space $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ is maximal nonnegative, maximal nonpositive, or maximal neutral if the existence of a linear subspace with $H \subset H'$, where $H'$ is nonnegative, nonpositive, or neutral, respectively, implies that $H' = H$.

By considering a relation $H$ as a subspace of $\mathfrak H^2$ with the usual inner product or as a subspace of $\mathfrak H^2$ with the inner product $[\![\cdot, \cdot]\!]$, the following correspondence is an immediate consequence of (1.8.4) and (1.8.5): $H$ is a (maximal) dissipative, (maximal) accumulative, (maximal) symmetric, or self-adjoint relation in $\mathfrak H$ if and only if $H$ is a (maximal) nonnegative, (maximal) nonpositive, (maximal) neutral, or hypermaximal neutral subspace of $(\mathfrak H^2, [\![\cdot, \cdot]\!])$, respectively.
Let $U$ be a linear operator from $\mathfrak H^2$ to $\mathfrak K^2$. Then $U$ is said to be isometric from $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ to $(\mathfrak K^2, [\![\cdot, \cdot]\!])$ if

$$[\![U\widehat{h}, U\widehat{k}]\!]_{\mathfrak{K}^2} = [\![\widehat{h}, \widehat{k}]\!]_{\mathfrak{H}^2} \quad \text{for all} \quad \widehat{h}, \widehat{k} \in \operatorname{dom} U. \tag{1.8.6}$$

In addition, $U$ is said to be unitary from $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ to $(\mathfrak K^2, [\![\cdot, \cdot]\!])$ if $U$ is isometric from $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ to $(\mathfrak K^2, [\![\cdot, \cdot]\!])$ and $\operatorname{dom} U = \mathfrak H^2$ and $\operatorname{ran} U = \mathfrak K^2$.

**Lemma 1.8.1.** Let $\mathfrak H$ and $\mathfrak K$ be Hilbert spaces and let $U$ be an isometric operator from $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ to $(\mathfrak K^2, [\![\cdot, \cdot]\!])$. Assume that $\operatorname{dom} U$ is closed and that $U$ is surjective. Then $U$ is bounded.

Proof. To see that the operator $U$ is bounded, it suffices to show that $U$ is closed and to apply the closed graph theorem. Let $(\widehat h_n)$ be a sequence in $\operatorname{dom} U$ such that

$$
\widehat{h}\_n \to \widehat{h}, \quad U\widehat{h}\_n \to \widehat{\varphi}
$$

for some $\widehat h \in \mathfrak H^2$ and $\widehat \varphi \in \mathfrak K^2$. Since $\operatorname{dom} U$ is closed, it follows that $\widehat h \in \operatorname{dom} U$. As $U$ is surjective, one can choose for each $\widehat \psi \in \mathfrak K^2$ an element $\widehat k \in \operatorname{dom} U$ such that $U\widehat k = \mathcal J_{\mathfrak K}\widehat \psi$; here $\mathcal J_{\mathfrak K}$ is defined in the same way as in (1.8.1) and is a unitary and self-adjoint operator in $\mathfrak K^2$. Then it follows from the identity (1.8.6) and the continuity of $[\![\cdot, \cdot]\!]_{\mathfrak H^2}$ that

$$\begin{split} (\widehat{\varphi}, \widehat{\psi})_{\mathfrak{K}^{2}} = \lim_{n \to \infty} \left( U\widehat{h}_{n}, \mathcal{J}_{\mathfrak{K}} U\widehat{k} \right)_{\mathfrak{K}^{2}} = \lim_{n \to \infty} [\![U\widehat{h}_{n}, U\widehat{k}]\!]_{\mathfrak{K}^{2}} = \lim_{n \to \infty} [\![\widehat{h}_{n}, \widehat{k}]\!]_{\mathfrak{H}^{2}} \\ = [\![\widehat{h}, \widehat{k}]\!]_{\mathfrak{H}^{2}} = [\![U\widehat{h}, U\widehat{k}]\!]_{\mathfrak{K}^{2}} = \left( U\widehat{h}, \mathcal{J}_{\mathfrak{K}} U\widehat{k} \right)_{\mathfrak{K}^{2}} = (U\widehat{h}, \widehat{\psi})_{\mathfrak{K}^{2}}. \end{split}$$

Since $\widehat \psi \in \mathfrak K^2$ is arbitrary, this gives $\widehat \varphi = U\widehat h$. It follows that the operator $U$ is closed, and since $\operatorname{dom} U$ is closed, one sees that $U$ is bounded. $\square$

**Proposition 1.8.2.** Let $\mathfrak H$ and $\mathfrak K$ be Hilbert spaces and let $U$ be an operator from $\mathfrak H^2$ to $\mathfrak K^2$. Then the following statements are equivalent:

(i) $U$ is unitary from $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ to $(\mathfrak K^2, [\![\cdot, \cdot]\!])$;

(ii) $U \in \mathbf{B}(\mathfrak H^2, \mathfrak K^2)$ and

$$U^*\mathcal{J}_{\mathfrak{K}}U = \mathcal{J}_{\mathfrak{H}} \quad \text{and} \quad U\mathcal{J}_{\mathfrak{H}}U^* = \mathcal{J}_{\mathfrak{K}};\tag{1.8.7}$$

(iii) $U \in \mathbf{B}(\mathfrak H^2, \mathfrak K^2)$ is surjective and $U^*\mathcal{J}_{\mathfrak{K}}U = \mathcal{J}_{\mathfrak{H}}$ holds.

Proof. (i) ⇒ (ii) It follows from Lemma 1.8.1 that the operator $U$ is bounded and hence $U \in \mathbf{B}(\mathfrak H^2, \mathfrak K^2)$. Moreover, (1.8.6) implies

$$\left(U^*\mathcal{J}_{\mathfrak{K}}U\widehat{\varphi}, \widehat{\psi}\right)_{\mathfrak{H}^2} = [\![U\widehat{\varphi}, U\widehat{\psi}]\!]_{\mathfrak{K}^2} = [\![\widehat{\varphi}, \widehat{\psi}]\!]_{\mathfrak{H}^2} = \left(\mathcal{J}_{\mathfrak{H}}\widehat{\varphi}, \widehat{\psi}\right)_{\mathfrak{H}^2} \tag{1.8.8}$$

for all $\widehat \varphi, \widehat \psi \in \mathfrak H^2$, which yields the first identity in (1.8.7). To prove the second identity in (1.8.7), let $\widehat \varphi, \widehat \psi \in \mathfrak K^2$ and choose $\widehat \eta \in \mathfrak H^2$ such that $\widehat \varphi = \mathcal J_{\mathfrak K}U\widehat \eta$, which is possible as $U$ is surjective. It follows from the first identity in (1.8.7), and the identities $\mathcal J_{\mathfrak H} = \mathcal J_{\mathfrak H}^{-1}$ and $\mathcal J_{\mathfrak K} = \mathcal J_{\mathfrak K}^{-1}$, that

$$\left(U\mathcal{J}_{\mathfrak{H}}U^*\widehat{\varphi}, \widehat{\psi}\right)_{\mathfrak{K}^2} = \left(U\mathcal{J}_{\mathfrak{H}}U^*\mathcal{J}_{\mathfrak{K}}U\widehat{\eta}, \widehat{\psi}\right)_{\mathfrak{K}^2} = \left(U\widehat{\eta}, \widehat{\psi}\right)_{\mathfrak{K}^2} = \left(\mathcal{J}_{\mathfrak{K}}\widehat{\varphi}, \widehat{\psi}\right)_{\mathfrak{K}^2}.$$

This implies the second identity in (1.8.7).

(ii) ⇒ (iii) The second identity in (1.8.7) yields that U is surjective, and hence (iii) holds.

(iii) ⇒ (i) The identity $U^*\mathcal{J}_{\mathfrak{K}}U = \mathcal{J}_{\mathfrak{H}}$ and the reasoning in (1.8.8) show that (1.8.6) holds, and hence $U$ is isometric from $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ to $(\mathfrak K^2, [\![\cdot, \cdot]\!])$. As $U$ is surjective, it follows that $U$ is unitary from $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ to $(\mathfrak K^2, [\![\cdot, \cdot]\!])$. $\square$

An important feature of operators which are isometric or unitary in the present sense is the way they transform certain classes of subspaces. Let $H$ be a linear subspace of $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ which is nonnegative, nonpositive, or neutral. If $U$ is a linear operator from $\mathfrak H^2$ to $\mathfrak K^2$ which is isometric from $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ to $(\mathfrak K^2, [\![\cdot, \cdot]\!])$, then it follows directly from the definition that $U$ maps $H \cap \operatorname{dom} U$ into a nonnegative, nonpositive, or neutral subspace of $(\mathfrak K^2, [\![\cdot, \cdot]\!])$, respectively.

**Lemma 1.8.3.** Let $U$ be a unitary operator from $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ to $(\mathfrak K^2, [\![\cdot, \cdot]\!])$. Then $U$ provides a one-to-one correspondence between the (maximal) nonnegative, (maximal) nonpositive, (maximal) neutral, and hypermaximal neutral subspaces in $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ and $(\mathfrak K^2, [\![\cdot, \cdot]\!])$, respectively.

Proof. Only the statement about hypermaximal neutral subspaces needs attention. For it, one observes that for any subspace H of H<sup>2</sup> one has

$$(UH)^{[\![\perp]\!]} = U(H^{[\![\perp]\!]}).$$

Thus, $H = H^{[\![\perp]\!]}$ if and only if $UH = U(H^{[\![\perp]\!]}) = (UH)^{[\![\perp]\!]}$. $\square$

Let $\mathfrak H$ and $\mathfrak K$ be Hilbert spaces and let $\mathcal W \in \mathbf{B}(\mathfrak H \times \mathfrak H, \mathfrak K \times \mathfrak K)$ have the matrix decomposition

$$\mathcal{W} = \begin{pmatrix} W\_{11} & W\_{12} \\ W\_{21} & W\_{22} \end{pmatrix}. \tag{1.8.9}$$

In much the same way as the scalar Möbius transform in Definition 1.1.10, the operator $\mathcal W$ induces the transformation

$$\mathcal{W} : \mathfrak{H} \times \mathfrak{H} \to \mathfrak{K} \times \mathfrak{K}, \quad \{h, h'\} \mapsto \left\{ W_{11}h + W_{12}h',\, W_{21}h + W_{22}h' \right\}.$$

The meaning of $\mathcal W$, either as a matrix of operators or as a transformation, will be clear from the context.

**Definition 1.8.4.** Let $\mathcal W \in \mathbf{B}(\mathfrak H \times \mathfrak H, \mathfrak K \times \mathfrak K)$ have the matrix decomposition as in (1.8.9) and let $H$ be a relation in $\mathfrak H$. Then the Möbius transform of $H$ is the relation $\mathcal W[H]$ in $\mathfrak K$ defined by

$$\mathcal{W}[H] = \left\{ \{W\_{11}h + W\_{12}h', W\_{21}h + W\_{22}h'\} \, : \, \{h, h'\} \in H \right\}.$$

Note that the domain and range of the Möbius transform are given by

$$\begin{aligned} \text{dom}\,\mathcal{W}[H] &= \{W\_{11}h + W\_{12}h' : \{h, h'\} \in H\}, \\ \text{ran}\,\mathcal{W}[H] &= \{W\_{21}h + W\_{22}h' : \{h, h'\} \in H\}. \end{aligned}$$

Moreover, if $\mathfrak G$ is a further Hilbert space and $\mathcal V \in \mathbf{B}(\mathfrak K \times \mathfrak K, \mathfrak G \times \mathfrak G)$, then one has $\mathcal V[\mathcal W[H]] = (\mathcal V \circ \mathcal W)[H]$. In the case where $\mathfrak H = \mathfrak K$ and $\mathcal W^{-1} \in \mathbf{B}(\mathfrak H \times \mathfrak H)$ it follows that the inverse Möbius transform exists and is given by the inverse of $\mathcal W$. In this case it also follows that

$$\mathcal{W}[H] \text{ is closed if and only if } H \text{ is closed.} \tag{1.8.10}$$

If $\mathcal W$ in Definition 1.8.4 is unitary with respect to the indefinite inner products $[\![\cdot, \cdot]\!]$ in $\mathfrak H^2$ and $\mathfrak K^2$ (see Proposition 1.8.2), then the corresponding Möbius transform has useful additional geometric properties.

**Theorem 1.8.5.** Let $\mathcal W \in \mathbf{B}(\mathfrak H \times \mathfrak H, \mathfrak K \times \mathfrak K)$ have the matrix decomposition in (1.8.9) and assume that $\mathcal W$ satisfies the identities

$$\mathcal{W}^\* \mathcal{J}\_{\mathfrak{K}} \mathcal{W} = \mathcal{J}\_{\mathfrak{H}} \quad \text{and} \quad \mathcal{W} \mathcal{J}\_{\mathfrak{H}} \mathcal{W}^\* = \mathcal{J}\_{\mathfrak{K}}.\tag{1.8.11}$$

Then $\mathcal W$ provides a one-to-one correspondence between the (maximal) dissipative, (maximal) accumulative, (maximal) symmetric, and self-adjoint relations in $\mathfrak H$ and the (maximal) dissipative, (maximal) accumulative, (maximal) symmetric, and self-adjoint relations in $\mathfrak K$, respectively.

Proof. By Proposition 1.8.2, the operator $\mathcal W$ is unitary from $(\mathfrak H^2, [\![\cdot, \cdot]\!])$ to $(\mathfrak K^2, [\![\cdot, \cdot]\!])$. Recall that the notions of (maximal) dissipative, (maximal) accumulative, (maximal) symmetric, and self-adjoint relations correspond to the notions of (maximal) nonnegative, (maximal) nonpositive, (maximal) neutral, and hypermaximal neutral subspaces, respectively. Therefore, the asserted results follow from Lemma 1.8.3. $\square$

Note that in the case where $\mathcal W \in \mathbf{B}(\mathfrak H \times \mathfrak H, \mathfrak K \times \mathfrak K)$ satisfies (1.8.11) the inverse $\mathcal W^{-1} \in \mathbf{B}(\mathfrak K \times \mathfrak K, \mathfrak H \times \mathfrak H)$ is given by

$$\mathcal{W}^{-1} = \begin{pmatrix} W\_{22}^\* & -W\_{12}^\* \\ -W\_{21}^\* & W\_{11}^\* \end{pmatrix}.$$
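Both the identities (1.8.11) and the displayed inverse formula can be checked numerically in a minimal scalar sketch ($\mathfrak H = \mathfrak K = \mathbb C$, names illustrative): a real rotation commutes with $\mathcal J$ and therefore satisfies (1.8.11).

```python
import numpy as np

# Scalar case: J = [[0, -i], [i, 0]] and W a real rotation, which commutes
# with J and hence satisfies (1.8.11).
th = 0.7
W = np.array([[np.cos(th), np.sin(th)],
              [-np.sin(th), np.cos(th)]])
J = np.array([[0, -1j], [1j, 0]])

assert np.allclose(W.conj().T @ J @ W, J)   # W* J W = J
assert np.allclose(W @ J @ W.conj().T, J)   # W J W* = J

# the stated inverse formula W^{-1} = [[W22*, -W12*], [-W21*, W11*]]
W_inv = np.array([[np.conj(W[1, 1]), -np.conj(W[0, 1])],
                  [-np.conj(W[1, 0]), np.conj(W[0, 0])]])
assert np.allclose(W_inv @ W, np.eye(2))
```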

## **1.9 Convergence of sequences of relations**

This section is devoted to the convergence of sequences of relations in a Hilbert space. There are two notions to be discussed: strong graph convergence and strong resolvent convergence. It will be shown via the uniform boundedness principle that under certain circumstances these notions are equivalent. In particular, the equivalence holds for sequences of self-adjoint or maximal accumulative (dissipative) relations.

First recall the following well-known result for bounded linear operators. Let $H_n \in \mathbf{B}(\mathfrak H, \mathfrak K)$ be a sequence of bounded linear operators and assume that $\lim_{n\to\infty} H_n h$ exists in $\mathfrak K$ for all $h \in \mathfrak H$. An application of the uniform boundedness principle shows that there is a uniform bound: $\|H_n\| \leq C$ for some $C \geq 0$. Moreover,

$$H\_{\infty}h = \lim\_{n \to \infty} H\_n h, \quad h \in \mathfrak{H},\tag{1.9.1}$$

defines an operator $H_\infty \in \mathbf{B}(\mathfrak H, \mathfrak K)$ with $\|H_\infty\| \leq C$. A sequence of operators $H_n \in \mathbf{B}(\mathfrak H, \mathfrak K)$ is said to converge strongly to $H_\infty \in \mathbf{B}(\mathfrak H, \mathfrak K)$ if $H_n h \to H_\infty h$ for all $h \in \mathfrak H$; in this case there is a uniform bound $\|H_n\| \leq C$ for some $C \geq 0$. These results will be used frequently in this section. In the special case $\mathfrak K = \mathfrak H$ the limit result (1.9.1) leads to the identity

$$(H\_{\infty}h, h) = \lim\_{n \to \infty} (H\_n h, h), \quad h \in \mathfrak{H}.\tag{1.9.2}$$

Hence, if all $H_n \in \mathbf{B}(\mathfrak H)$ are self-adjoint (dissipative, accumulative), then (1.9.2) shows that $H_\infty \in \mathbf{B}(\mathfrak H)$ is self-adjoint (dissipative, accumulative, respectively).

Also recall the following situation. Let $H_n$ be a nondecreasing sequence of nonnegative operators in $\mathbf{B}(\mathfrak{H})$ bounded above by $H' \in \mathbf{B}(\mathfrak{H})$:

$$0 \le (H\_m h, h) \le (H\_n h, h) \le (H' h, h), \quad h \in \mathfrak{H}, \quad n > m. \tag{1.9.3}$$

Then clearly $H_n \le H'$, and it follows from the Cauchy–Schwarz inequality for the nonnegative inner product $((H_n - H_m)\cdot, \cdot)$ that

$$\|(H_n - H_m)h\|^2 \le \|H_n - H_m\| \,((H_n - H_m)h, h) \le 2\|H'\| \,((H_n - H_m)h, h) \tag{1.9.4}$$

for all $h \in \mathfrak{H}$. Consequently, there exists an operator $H_\infty \in \mathbf{B}(\mathfrak{H})$ such that $0 \le H_n \le H_\infty \le H'$ and $H_n h \to H_\infty h$ for all $h \in \mathfrak{H}$ as $n \to \infty$. Note that a similar observation is valid for a nonincreasing sequence of nonnegative operators in $\mathbf{B}(\mathfrak{H})$.
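The strong convergence of a bounded nondecreasing sequence can be checked numerically. The following sketch is an assumed finite-dimensional toy example (not from the text): it takes $H_n = (1 - 1/n)P$ for an orthogonal projection $P$, so that $H' = P$ serves as upper bound and the strong limit is $H_\infty = P$.

```python
import numpy as np

# Toy example: a nondecreasing sequence of nonnegative operators
# 0 <= H_n <= H_{n+1} <= H' converges strongly; here H' = P, a projection.
P = np.array([[1.0, 0.0], [0.0, 0.0]])   # orthogonal projection, plays H'
H = lambda n: (1 - 1.0 / n) * P          # H_n = (1 - 1/n) P

h = np.array([3.0, 4.0])
limit = P @ h                            # strong limit H_inf h = P h
print(np.allclose(H(10**6) @ h, limit, atol=1e-5))
```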

Now one introduces two notions of convergence for relations from H to K: strong graph convergence and strong resolvent convergence. First one defines the notion of strong graph limit.

**Definition 1.9.1.** Let $H_n$ be a sequence of relations from $\mathfrak{H}$ to $\mathfrak{K}$. The strong graph limit is the linear relation $H_\infty$ consisting of all $\{h, h'\} \in \mathfrak{H} \times \mathfrak{K}$ for which there exists a sequence $\{h_n, h'_n\} \in H_n$ such that $\{h_n, h'_n\} \to \{h, h'\}$ in $\mathfrak{H} \times \mathfrak{K}$. The sequence $H_n$ is said to converge to $H_\infty$ in the strong graph sense if $H_\infty$ is the strong graph limit of $H_n$.

By definition, the strong graph limit $H_\infty$ always exists, it is a uniquely determined relation from $\mathfrak{H}$ to $\mathfrak{K}$, and $H_\infty$ is closed. In fact, let $\{h, h'\}$ be the limit of $\{k_n, k'_n\} \in H_\infty$. Then for each $n \in \mathbb{N}$ there exist elements $\{h_n, h'_n\} \in H_n$ with

$$\left\|\{k_n, k'_n\} - \{h_n, h'_n\}\right\| \le \frac{1}{n}.$$

Clearly, $\{h_n, h'_n\} \to \{h, h'\}$, and it follows that $\{h, h'\} \in H_\infty$. Thus, $H_\infty$ is closed. A similar argument shows that the strong graph limits of a sequence $H_n$ and of its closures $\overline{H_n}$ coincide. Note also that the strong graph limit may coincide with the zero set $\{0, 0\}$ in $\mathfrak{H} \times \mathfrak{K}$. Furthermore, if $H_\infty$ is the strong graph limit of $H_n$, then $(H_\infty)^{-1}$ is the strong graph limit of $(H_n)^{-1}$. Finally, note that if $H_\infty \in \mathbf{B}(\mathfrak{H}, \mathfrak{K})$ is the strong limit of $H_n \in \mathbf{B}(\mathfrak{H}, \mathfrak{K})$, then (the graph of) $H_\infty$ is the strong graph limit of $H_n$.

In general, the strong graph convergence $H_n \to H_\infty$ does not imply the strong graph convergence of the adjoints $(H_n)^*$ to $(H_\infty)^*$. But there is the following observation.

**Lemma 1.9.2.** Let $H_n$ and $H_\infty$ be relations from $\mathfrak{H}$ to $\mathfrak{K}$. Assume that $H_n$ converges to $H_\infty$ in the strong graph sense. Let $K$ be the strong graph limit in $\mathfrak{K} \times \mathfrak{H}$ of the sequence $(H_n)^*$. Then

$$K \subset (H\_{\infty})^\*.$$

Proof. Assume that $\{f, f'\} \in K$. Then there exist $\{f_n, f'_n\} \in (H_n)^*$ such that $\{f_n, f'_n\} \to \{f, f'\}$. Now let $\{h, h'\} \in H_\infty$, so that there exist $\{h_n, h'_n\} \in H_n$ such that $\{h_n, h'_n\} \to \{h, h'\}$. In particular, one sees that $(f'_n, h_n) = (f_n, h'_n)$, which in the limit gives

$$(f',h) = (f,h'), \quad \{h,h'\} \in H\_{\infty}.$$

In other words, $\{f, f'\} \in (H_\infty)^*$ and thus $K \subset (H_\infty)^*$. $\square$

In order to define strong resolvent convergence of a sequence of relations $H_n$ in $\mathfrak{H}$ to a relation $H_\infty$ in $\mathfrak{H}$ the following set is needed:

$$\rho\_{\infty} = \rho(H\_{\infty}) \cap \bigcap\_{n=1}^{\infty} \rho(H\_n),$$

and, whenever it is used, it is tacitly assumed that it is nonempty. Next the notion of strong resolvent limit is defined.

**Definition 1.9.3.** A sequence of closed relations $H_n$ in $\mathfrak{H}$ is said to converge to a closed linear relation $H_\infty$ in $\mathfrak{H}$ in the strong resolvent sense at the point $\lambda \in \rho_\infty$ if for all $h \in \mathfrak{H}$

$$(H\_n - \lambda)^{-1}h \to (H\_\infty - \lambda)^{-1}h.\tag{1.9.5}$$

In the case of strong resolvent convergence there is also an interplay between the convergence of $H_n$ and that of $(H_n)^{-1}$. Let the closed relations $H_n$ converge in the strong resolvent sense to the closed relation $H_\infty$ at the point $\lambda \in \rho_\infty$. Then it follows from (1.2.14) that for $\lambda \ne 0$

$$\frac{1}{\lambda} \in \rho\left( (H\_{\infty})^{-1} \right) \cap \bigcap\_{n=1}^{\infty} \rho\left( (H\_n)^{-1} \right),$$

and Corollary 1.1.12 implies that $(H_n)^{-1}$ converges to $(H_\infty)^{-1}$ in the strong resolvent sense at $1/\lambda$. Of course, when $\lambda \in \rho_\infty$ and $\lambda = 0$, the operators $(H_n)^{-1} \in \mathbf{B}(\mathfrak{H})$ converge strongly to $(H_\infty)^{-1} \in \mathbf{B}(\mathfrak{H})$.

Strong graph convergence and strong resolvent convergence are closely related in the presence of a uniform bound, as described below.

**Theorem 1.9.4.** Let $H_n$ and $H_\infty$ be closed linear relations in $\mathfrak{H}$. Then the following statements hold:

(i) Assume that $H_n$ converges to $H_\infty$ in the strong resolvent sense at the point $\lambda \in \rho_\infty$. Then $H_n$ converges to $H_\infty$ in the strong graph sense and there exists $C_\lambda > 0$ such that for all $n \in \mathbb{N}$

$$\|(H\_n - \lambda)^{-1}\| \le C\_{\lambda}.\tag{1.9.6}$$

(ii) Assume that $H_n$ converges to $H_\infty$ in the strong graph sense. Let $\lambda \in \rho_\infty$ be any point for which there exists $C_\lambda > 0$ such that (1.9.6) holds for all $n \in \mathbb{N}$. Then $H_n$ converges to $H_\infty$ in the strong resolvent sense at the point $\lambda$.

Proof. (i) Assume that (1.9.5) holds for some $\lambda \in \rho_\infty$. In particular, then one has $(H_n - \lambda)^{-1} \in \mathbf{B}(\mathfrak{H})$. Recall that (1.9.5) implies, via the uniform boundedness principle, that the uniform estimate (1.9.6) holds.

Let $\Gamma$ be the strong graph limit of the sequence $H_n$. Let $\{h, h'\} \in H_\infty$. Then the sequence

$$\left\{ (H\_n - \lambda)^{-1} (h' - \lambda h), (I + \lambda (H\_n - \lambda)^{-1}) (h' - \lambda h) \right\} \in H\_n$$

converges to

$$\left\{ (H\_{\infty} - \lambda)^{-1} (h' - \lambda h), (I + \lambda (H\_{\infty} - \lambda)^{-1}) (h' - \lambda h) \right\} = \{h, h'\}.$$

Hence, $\{h, h'\} \in \Gamma$, which shows that $H_\infty \subset \Gamma$.

Conversely, let $\{h, h'\} \in \Gamma$ and let $\{h_n, h'_n\} \in H_n$ be a sequence such that $\{h_n, h'_n\} \to \{h, h'\}$. Then

$$\begin{aligned} (H_\infty - \lambda)^{-1}(h'_n - \lambda h_n) - h_n &= (H_\infty - \lambda)^{-1}(h'_n - \lambda h_n) - (H_n - \lambda)^{-1}(h'_n - \lambda h_n) \\ &= \left[(H_\infty - \lambda)^{-1} - (H_n - \lambda)^{-1}\right]\left((h'_n - \lambda h_n) - (h' - \lambda h)\right) \\ &\quad + \left[(H_\infty - \lambda)^{-1} - (H_n - \lambda)^{-1}\right](h' - \lambda h), \end{aligned}$$

and the terms on the right-hand side tend to $0$ as $n \to \infty$ due to the uniform bound $\|(H_n - \lambda)^{-1}\| \le C_\lambda$ and the strong resolvent convergence. Hence, it follows that

$$(H\_{\infty} - \lambda)^{-1}(h' - \lambda h) = h,$$

so that $\{h, h'\} \in H_\infty$. This shows that $\Gamma \subset H_\infty$.

(ii) Assume that $H_\infty$ is the strong graph limit of the sequence $H_n$. Let $\lambda \in \rho_\infty$ and let $h \in \mathfrak{H}$. Then, since $\lambda \in \rho(H_\infty)$, there is an element $\{f, f'\} \in H_\infty$ with $f' - \lambda f = h$, so that

$$(H\_{\infty} - \lambda)^{-1}h = (H\_{\infty} - \lambda)^{-1}(f' - \lambda f) = f.$$

Since $H_\infty$ is the strong graph limit of the sequence $H_n$, there exists a sequence $\{f_n, f'_n\} \in H_n$ with the property that $\{f_n, f'_n\} \to \{f, f'\}$. Then

$$\begin{aligned} (H_n - \lambda)^{-1}h - (H_\infty - \lambda)^{-1}h &= (H_n - \lambda)^{-1}\left((f' - \lambda f) - (f'_n - \lambda f_n)\right) \\ &\quad + (H_n - \lambda)^{-1}(f'_n - \lambda f_n) - (H_\infty - \lambda)^{-1}(f' - \lambda f) \\ &= (H_n - \lambda)^{-1}\left((f' - \lambda f) - (f'_n - \lambda f_n)\right) + f_n - f, \end{aligned}$$

and, since for $\lambda \in \rho_\infty$ there is the bound (1.9.6), the right-hand side tends to $0$ as $n \to \infty$. Hence, $H_n$ converges to $H_\infty$ in the strong resolvent sense at $\lambda$. $\square$

The following result is a useful consequence of Theorem 1.9.4.

**Corollary 1.9.5.** Let $H_n$ and $H_\infty$ be closed relations in $\mathfrak{H}$ and let $H_n$ satisfy the uniform bound

$$\|(H\_n - \lambda)^{-1}\| \le C\_{\lambda} \tag{1.9.7}$$

for some $\lambda \in \rho_\infty$. Assume that the relation $H$ is a restriction of $H_\infty$ which satisfies

$$\overline{\operatorname{ran}}\,(H - \lambda) = \mathfrak{H},$$

and that for every $\{h, h'\} \in H$ there exists a sequence $\{h, h'_n\} \in H_n$ such that $h'_n \to h'$. Then $H$ is dense in $H_\infty$ and $H_n$ converges to $H_\infty$ in the strong graph sense or, equivalently, in the strong resolvent sense at $\lambda \in \rho_\infty$.

Proof. Let $\{h, h'\} \in H \subset H_\infty$ and choose $\{h, h'_n\} \in H_n$ such that $h'_n \to h'$. Then for $\lambda \in \rho_\infty$ one has

$$\{h' - \lambda h, h\} \in (H\_{\infty} - \lambda)^{-1} \quad \text{and} \quad \{h'\_n - \lambda h, h\} \in (H\_n - \lambda)^{-1},$$

so that

$$(H\_{\infty} - \lambda)^{-1}(h' - \lambda h) = (H\_n - \lambda)^{-1}(h'\_n - \lambda h).$$

Consequently,

$$\begin{aligned} (H_n - \lambda)^{-1}(h' - \lambda h) - (H_\infty - \lambda)^{-1}(h' - \lambda h) &= (H_n - \lambda)^{-1}(h' - \lambda h) - (H_n - \lambda)^{-1}(h'_n - \lambda h) \\ &= (H_n - \lambda)^{-1}(h' - h'_n), \end{aligned}$$

and therefore, by the uniform bound,

$$\left\|(H_n - \lambda)^{-1}(h' - \lambda h) - (H_\infty - \lambda)^{-1}(h' - \lambda h)\right\| \le C_\lambda \|h' - h'_n\|$$

for all $\{h, h'\} \in H$. Since $\operatorname{ran}(H - \lambda)$ is dense in $\mathfrak{H}$, it follows from (1.9.7) that $H_n$ converges to $H_\infty$ in the strong resolvent sense at $\lambda \in \rho_\infty$, and hence also in the strong graph sense.

It remains to show that $H$ is dense in $H_\infty$. Observe that $H \subset H_\infty$ implies $(H - \lambda)^{-1} \subset (H_\infty - \lambda)^{-1}$. Since $\lambda \in \rho_\infty$ and $\operatorname{ran}(H - \lambda)$ is dense in $\mathfrak{H}$, it follows that $(H - \lambda)^{-1}$ is a densely defined bounded operator in $\mathfrak{H}$. Thus, its closure coincides with $(H_\infty - \lambda)^{-1}$, which gives $\overline{H} = H_\infty$. $\square$

Let $H_n$ and $H_\infty$ be closed relations in $\mathfrak{H}$. When all these relations are self-adjoint or maximal dissipative (accumulative), then there is automatically a uniform bound of the form (1.9.6).

**Corollary 1.9.6.** Let $H_n$ and $H_\infty$ be relations in $\mathfrak{H}$. Then the following statements hold:

(i) If $H_n$ and $H_\infty$ are self-adjoint, then $H_n$ converges to $H_\infty$ in the strong graph sense if and only if $H_n$ converges to $H_\infty$ in the strong resolvent sense for some, and hence for all, $\lambda \in \mathbb{C} \setminus \mathbb{R}$.

(ii) If $H_n$ and $H_\infty$ are semibounded self-adjoint with common lower bound $\gamma$, then $H_n$ converges to $H_\infty$ in the strong graph sense if and only if $H_n$ converges to $H_\infty$ in the strong resolvent sense for some, and hence for all, $\lambda \in \mathbb{C} \setminus [\gamma, \infty)$.

(iii) If $H_n$ and $H_\infty$ are maximal dissipative (maximal accumulative), then $H_n$ converges to $H_\infty$ in the strong graph sense if and only if $H_n$ converges to $H_\infty$ in the strong resolvent sense for some, and hence for all, $\lambda \in \mathbb{C}^-$ ($\lambda \in \mathbb{C}^+$, respectively).
Proof. The proof follows from Theorem 1.9.4 when one recalls that in case (i) one has

$$\|(H\_n - \lambda)^{-1}\| \le \frac{1}{|\mathrm{Im}\,\lambda|}, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

while in case (ii) one has in addition

$$\|(H\_n - \lambda)^{-1}\| \le \frac{1}{\gamma - \lambda}, \qquad \lambda < \gamma;$$

cf. Proposition 1.4.4 and Proposition 1.4.6. In case (iii) with maximal dissipative relations $H_n$ one has

$$\|(H\_n - \lambda)^{-1}\| \le \frac{1}{-\text{Im}\,\lambda}, \qquad \lambda \in \mathbb{C}^-;$$

cf. Proposition 1.6.3. The case of maximal accumulative relations is analogous. $\square$

In the definition of convergence in the strong resolvent sense the limit relation is included. There are situations where the limit is not known beforehand.

**Theorem 1.9.7.** Let $H_n$ be a sequence of closed relations in $\mathfrak{H}$. Let

$$\mathcal{E} \subset \bigcap\_{n=1}^{\infty} \rho(H\_n) \tag{1.9.8}$$

be a nonempty set such that for all $\lambda \in \mathcal{E}$ and all $h \in \mathfrak{H}$ the sequence $(H_n - \lambda)^{-1}h$ converges. Then there is a closed relation $H_\infty$ with $\mathcal{E} \subset \rho(H_\infty)$ such that $H_n$ converges to $H_\infty$ in the strong resolvent sense at each $\lambda \in \mathcal{E}$.

Proof. Let $\lambda \in \mathcal{E}$, so that the sequence $(H_n - \lambda)^{-1}h$ converges for all $h \in \mathfrak{H}$. Since $(H_n - \lambda)^{-1} \in \mathbf{B}(\mathfrak{H})$, it follows from (1.9.1) (with $H_n$ in (1.9.1) replaced by $(H_n - \lambda)^{-1}$) that there exists an operator $B(\lambda) \in \mathbf{B}(\mathfrak{H})$ such that for all $h \in \mathfrak{H}$

$$(H\_n - \lambda)^{-1}h \to B(\lambda)h.$$

Define the relation H∞(λ) by

$$H\_{\infty}(\lambda) = B(\lambda)^{-1} + \lambda.$$

Then $H_\infty(\lambda)$ is closed, $B(\lambda) = (H_\infty(\lambda) - \lambda)^{-1}$, and $\lambda \in \rho(H_\infty(\lambda))$. In other words, $H_n$ converges to $H_\infty(\lambda)$ in the strong resolvent sense at the point $\lambda \in \mathcal{E}$. By Theorem 1.9.4, $H_\infty(\lambda)$ is the strong graph limit of $H_n$. Hence, $H_\infty(\lambda)$ is independent of the choice of $\lambda \in \mathcal{E}$. $\square$
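In finite dimensions the construction $H_\infty(\lambda) = B(\lambda)^{-1} + \lambda$ can be checked directly. The following sketch is an assumed toy example (a constant sequence $H_n = H$, purely to verify the algebra of the construction):

```python
import numpy as np

# B(lam) is the (here trivial) strong limit of the resolvents (H_n - lam)^{-1};
# the construction H_inf = B(lam)^{-1} + lam recovers the original operator.
H = np.diag([2.0, 4.0])
lam = 1.0                                        # lam in rho(H)
B_lam = np.linalg.inv(H - lam * np.eye(2))       # limit of (H_n - lam)^{-1}
H_rec = np.linalg.inv(B_lam) + lam * np.eye(2)   # B(lam)^{-1} + lam
print(np.allclose(H_rec, H))
```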

Theorem 1.9.7 gives rise to the following weakening of Corollary 1.9.6.

**Corollary 1.9.8.** Let $H_n$ be a sequence of relations in $\mathfrak{H}$. Then the following statements hold:

(i) Assume that all $H_n$ are self-adjoint and that $(H_n - \lambda_\pm)^{-1}h$ converges for all $h \in \mathfrak{H}$ and some $\lambda_+ \in \mathbb{C}^+$ and $\lambda_- \in \mathbb{C}^-$. Then there exists a self-adjoint relation $H_\infty$ such that $H_n$ converges to $H_\infty$ in the strong resolvent sense on $\mathbb{C} \setminus \mathbb{R}$.

(ii) Assume that all $H_n$ are semibounded self-adjoint with common lower bound $\gamma$ and that $(H_n - \lambda)^{-1}h$ converges for all $h \in \mathfrak{H}$ and some $\lambda \in \mathbb{C} \setminus [\gamma, \infty)$. Then there exists a semibounded self-adjoint relation $H_\infty$ with lower bound $\gamma$ such that $H_n$ converges to $H_\infty$ in the strong resolvent sense on $\mathbb{C} \setminus [\gamma, \infty)$.

(iii) Assume that all $H_n$ are maximal dissipative (maximal accumulative) and that $(H_n - \lambda)^{-1}h$ converges for all $h \in \mathfrak{H}$ and some $\lambda \in \mathbb{C}^-$ ($\lambda \in \mathbb{C}^+$). Then there exists a maximal dissipative (maximal accumulative) relation $H_\infty$ such that $H_n$ converges to $H_\infty$ in the strong resolvent sense on $\mathbb{C}^-$ ($\mathbb{C}^+$, respectively).
Proof. Let $H_n$ be any sequence of closed relations in $\mathfrak{H}$ such that for all $h \in \mathfrak{H}$ the sequence $(H_n - \lambda)^{-1}h$ converges for each $\lambda \in \mathcal{E}$, where $\mathcal{E}$ is a nonempty set satisfying (1.9.8). According to Theorem 1.9.7, there is a closed relation $H_\infty$, which is the limit of $H_n$ in the strong resolvent sense on $\mathcal{E}$, such that

$$\operatorname{ran}(H_\infty - \lambda) = \mathfrak{H}, \quad \lambda \in \mathcal{E}. \tag{1.9.9}$$

If there is a uniform bound as in (1.9.6), then Theorem 1.9.4 implies that the limit $H_\infty$ is also the strong graph limit of $H_n$. Hence, every $\{h, h'\} \in H_\infty$ can be approximated by $\{h_n, h'_n\} \in H_n$, which implies that

$$(h',h) = \lim\_{n \to \infty} (h'\_n, h\_n),\tag{1.9.10}$$

and thus also

$$\operatorname{Im}\left(h',h\right) = \lim\_{n \to \infty} \operatorname{Im}\left(h'\_n, h\_n\right). \tag{1.9.11}$$

(i) Assume that all $H_n$ are self-adjoint. Then the set $\mathcal{E} = \{\lambda_+, \lambda_-\}$ with some $\lambda_\pm \in \mathbb{C}^\pm$ satisfies (1.9.8) and, since all $H_n$ are symmetric, it follows from (1.9.11) that the closed relation $H_\infty$ is symmetric. Hence, (1.9.9) with $\mathcal{E} = \{\lambda_+, \lambda_-\}$ shows that $H_\infty$ is self-adjoint; cf. Theorem 1.5.5. Due to Corollary 1.9.6, one sees that $H_n$ converges to $H_\infty$ in the strong resolvent sense on $\mathbb{C} \setminus \mathbb{R}$.

(ii) Assume that all $H_n$ are semibounded and self-adjoint with common lower bound $\gamma$. Then the set $\mathcal{E} = \{\lambda\}$ with some $\lambda \in \mathbb{C} \setminus [\gamma, \infty)$ satisfies (1.9.8) and, since $(h'_n, h_n) \ge \gamma (h_n, h_n)$ for $\{h_n, h'_n\} \in H_n$, it follows from (1.9.10) that the closed relation $H_\infty$ is bounded below with lower bound $\gamma$. Hence, (1.9.9) shows that $H_\infty$ is self-adjoint; cf. Theorem 1.5.5. In view of Corollary 1.9.6, $H_n$ converges to $H_\infty$ in the strong resolvent sense on $\mathbb{C} \setminus [\gamma, \infty)$.

(iii) Assume that all $H_n$ are maximal dissipative. Then the set $\mathcal{E} = \{\lambda\}$ with some $\lambda \in \mathbb{C}^-$ satisfies (1.9.8) and, since all $H_n$ are dissipative, it follows from (1.9.11) that the closed relation $H_\infty$ is dissipative. Hence, (1.9.9) shows that $H_\infty$ is maximal dissipative; cf. Theorem 1.6.4. Due to Corollary 1.9.6, one sees that $H_n$ converges to $H_\infty$ in the strong resolvent sense on $\mathbb{C}^-$. The case where $H_n$ is maximal accumulative is treated analogously. $\square$

The following result is an illustration of the methods involving convergence in the graph sense and in the resolvent sense. In Chapter 5 this result will be used extensively.

**Proposition 1.9.9.** Let $H_n$ be a sequence of semibounded self-adjoint relations in $\mathfrak{H}$ with common lower bound $\gamma$. Assume that for $m > n$ and some $\lambda < \gamma$

$$0 \le (H\_m - \lambda)^{-1} \le (H\_n - \lambda)^{-1}.\tag{1.9.12}$$

Then there exists a semibounded self-adjoint relation $H_\infty$ with lower bound $\gamma$ such that $H_n$ converges to $H_\infty$ in the strong resolvent sense on $\mathbb{C} \setminus [\gamma, \infty)$, and

$$0 \le (H\_{\infty} - \lambda)^{-1} \le (H\_n - \lambda)^{-1}.\tag{1.9.13}$$

Proof. Let $\lambda < \gamma$ and let $h \in \mathfrak{H}$. By (1.9.12),

$$0 \le \left((H_m - \lambda)^{-1}h, h\right) \le \left((H_n - \lambda)^{-1}h, h\right)$$

for $m > n$, and it now follows in the same way as in (1.9.3)–(1.9.4) that the sequence $(H_n - \lambda)^{-1}h$ converges for every $h \in \mathfrak{H}$. Then by Corollary 1.9.8 there is a self-adjoint relation $H_\infty$ bounded below by $\gamma$ such that $H_n$ converges to $H_\infty$ in the strong resolvent sense on $\mathbb{C} \setminus [\gamma, \infty)$. It follows from (1.9.12) that (1.9.13) holds. $\square$

Before moving to a corollary of Proposition 1.9.9, recall the following simple antitonicity result. Let $A, B \in \mathbf{B}(\mathfrak{H})$ satisfy $0 \le A \le B$ and let $A$ be boundedly invertible. Then $B$ is boundedly invertible, $0 \le B^{-1}$, and $B^{-1} \le A^{-1}$. To see the last inequality, note that $(A\cdot, \cdot)$ is a nonnegative semi-inner product, so the Cauchy–Schwarz inequality yields for any $\varphi, \psi \in \mathfrak{H}$:

$$|(A\varphi,\psi)|^2 \le (A\varphi,\varphi)(A\psi,\psi) \le (A\varphi,\varphi)(B\psi,\psi).$$

Let $h \in \mathfrak{H}$ and choose $\varphi = A^{-1}h$ and $\psi = B^{-1}h$. Then this inequality reads $|(h, B^{-1}h)|^2 \le (A^{-1}h, h)(B^{-1}h, h)$, so that $(B^{-1}h, h) \le (A^{-1}h, h)$ for all $h \in \mathfrak{H}$, that is, $0 \le B^{-1} \le A^{-1}$.
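The antitonicity $0 \le A \le B \Rightarrow B^{-1} \le A^{-1}$ is easy to test on matrices. The sketch below is an assumed illustration (random positive definite test matrices, not part of the proof); it verifies that $A^{-1} - B^{-1}$ is nonnegative:

```python
import numpy as np

# Antitonicity check: A <= B with A boundedly invertible forces B^{-1} <= A^{-1}.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T + np.eye(3)          # 0 < A, boundedly invertible
B = A + np.eye(3)                # A <= B
D = np.linalg.inv(A) - np.linalg.inv(B)
print(np.all(np.linalg.eigvalsh(D) >= -1e-12))   # A^{-1} - B^{-1} >= 0
```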

The following corollary deals with the situation from the beginning of this section. However, now the nondecreasing sequence of self-adjoint operators in **B**(H) does not necessarily have an upper bound.

**Corollary 1.9.10.** Let $H_n \in \mathbf{B}(\mathfrak{H})$ be a sequence of self-adjoint operators which is nondecreasing, i.e., for all $h \in \mathfrak{H}$

$$(H\_n h, h) \le (H\_m h, h), \quad n < m,\tag{1.9.14}$$

and let $\gamma \in \mathbb{R}$ be the lower bound of $H_1$. Then there exists a semibounded self-adjoint relation $H_\infty$, bounded below by $\gamma$, such that $H_n$ converges to $H_\infty$ in the strong resolvent sense on $\mathbb{C} \setminus [\gamma, \infty)$, and

$$0 \le (H\_{\infty} - \lambda)^{-1} \le (H\_n - \lambda)^{-1}, \quad \lambda < \gamma. \tag{1.9.15}$$

Proof. Let $\gamma \in \mathbb{R}$ be the lower bound for $H_1$. Then $\gamma$ is a common lower bound for all $H_n$, i.e., $\gamma(h, h) \le (H_n h, h)$ for all $h \in \mathfrak{H}$. Furthermore, (1.9.14) gives for $n < m$:

$$0 \le ((H\_n - \lambda)h, h) \le ((H\_m - \lambda)h, h), \quad \lambda < \gamma.$$

Since $H_n - \lambda$ with $\lambda < \gamma$ is boundedly invertible for all $n \in \mathbb{N}$, this implies by antitonicity that (1.9.12) holds. Thus, (1.9.15) follows from Proposition 1.9.9. $\square$
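The following finite-dimensional sketch (an assumed toy example) shows how an unbounded nondecreasing sequence produces a multivalued limit relation: for $H_n = \operatorname{diag}(1, n)$ and $\lambda = 0 < \gamma = 1$, the resolvents converge to $\operatorname{diag}(1, 0)$, which is not injective, so the limit $H_\infty$ is a genuine (multivalued) relation rather than an operator:

```python
import numpy as np

# H_n = diag(1, n) is nondecreasing with no upper bound; (H_n - lam)^{-1}
# still converges for lam < gamma = 1, but the limit resolvent has a kernel.
lam = 0.0
def resolvent(n):
    return np.linalg.inv(np.diag([1.0, float(n)]) - lam * np.eye(2))

R_limit = np.diag([1.0, 0.0])    # strong limit of (H_n - lam)^{-1}
print(np.allclose(resolvent(10**8), R_limit, atol=1e-7))
```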

## **1.10 Parametric representations for relations**

The discussion in this section is centered on the question of when a relation from H to K can be seen as the range of a bounded column operator or as the kernel of a bounded row operator. The results will be used in the description of boundary value problems in Chapter 2.

Let H, K, and E be Hilbert spaces and let A ∈ **B**(E, H), B ∈ **B**(E, K). Then H defined by

$$H = \left\{ \{ \mathcal{A}e, \mathcal{B}e \} : e \in \mathfrak{E} \right\} \tag{1.10.1}$$

is a relation from H to K. The representation of the relation H in (1.10.1) is called a parametric representation and is denoted by H = {A, B}. It is sometimes convenient to rewrite (1.10.1) as

$$H = \text{ran}\begin{pmatrix} \mathcal{A} \\ \mathcal{B} \end{pmatrix},\tag{1.10.2}$$

that is, H is the range of the corresponding bounded column operator from E to H×K. Not all relations from H to K can be represented in the form (1.10.1); those that can will be characterized below.

An interesting feature of parametric representations is how they show up in adjoints. Namely, if H is given by (1.10.1) or, equivalently, by (1.10.2), then the adjoint H<sup>∗</sup> of H satisfies

$$H^\* = \left\{ \{f, f'\} \in \mathfrak{K} \times \mathfrak{H} : \mathcal{B}^\* f = \mathcal{A}^\* f' \right\},\tag{1.10.3}$$

or, equivalently,

$$H^\* = \ker\left(\mathcal{B}^\* \, \, -\mathcal{A}^\*\right),$$

that is, H<sup>∗</sup> can be written as the kernel of the corresponding bounded row operator from K × H to E.
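In finite dimensions the relationship between (1.10.1) and (1.10.3) can be checked directly. The sketch below is an assumed toy example: it represents the graph of a matrix $T$ by the pair $\mathcal{A} = I$, $\mathcal{B} = T$, so that the adjoint relation is the graph of $T^*$ and the kernel condition $\mathcal{B}^* f = \mathcal{A}^* f'$ holds:

```python
import numpy as np

# For H = graph(T) take A = I, B = T in (1.10.1); by (1.10.3) the adjoint
# relation consists of the pairs {f, f'} with B* f = A* f', i.e. f' = T* f.
T = np.array([[1.0, 2.0], [0.0, 3.0]])
A, B = np.eye(2), T

f = np.array([1.0, -1.0])
f_prime = T.conj().T @ f                  # {f, f'} in H* = graph(T*)
print(np.allclose(B.conj().T @ f, A.conj().T @ f_prime))
```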

The following theorem uses the notion of operator range. An operator range R in a Hilbert space X is defined as the range of a bounded everywhere defined operator from some Hilbert space Y to X.

**Theorem 1.10.1.** A relation H from H to K is of the form (1.10.1) with A ∈ **B**(E, H) and B ∈ **B**(E, K) if and only if H is an operator range. In particular, every closed relation H from H to K is of the form (1.10.1).

Proof. Let H be given by (1.10.1) and define the column operator R by

$$R = \begin{pmatrix} \mathcal{A} \\ \mathcal{B} \end{pmatrix} : \mathfrak{E} \to \begin{pmatrix} \mathfrak{H} \\ \mathfrak{K} \end{pmatrix},$$

where the column on the right stands for the Hilbert space H×K. Then R belongs to **B**(E, H×K) and clearly ran R coincides with (the graph of) the relation H; i.e., H is an operator range in H × K.

Conversely, assume that H is an operator range in H × K, so that (the graph of) H coincides with ran R for some R ∈ **B**(E, H × K). Let P<sup>H</sup> and P<sup>K</sup> be the orthogonal projections from H × K onto H and K, respectively. Then

$$\mathcal{A} = P\_{\mathfrak{H}} R \quad \text{and} \quad \mathcal{B} = P\_{\mathfrak{K}} R$$

define a pair of bounded operators A ∈ **B**(E, H) and B ∈ **B**(E, K) such that

$$H = \left\{ Rf : f \in \mathfrak{E} \right\} = \left\{ \left\{ \mathcal{A}f, \mathcal{B}f \right\} : f \in \mathfrak{E} \right\}.$$

Hence, H has the form (1.10.1).

Finally, every closed relation H from H to K is of the form (1.10.1), since it coincides with the range of the orthogonal projection from H × K onto (the closed graph of) H. $\square$

In the general operator representation (1.10.1) there clearly exists some redundancy: the closed linear subspace

$$\ker \begin{pmatrix} \mathcal{A} \\ \mathcal{B} \end{pmatrix} = \ker \mathcal{A} \cap \ker \mathcal{B} \subset \mathfrak{E}$$

does not contribute to H. Thus, one can restrict the operators A and B to the orthogonal complement of ker A∩ker B in E. The representing pair H = {A, B} is called tight if ker A ∩ ker B = {0}. All tight representations H = {A, B} are easily characterized.

**Lemma 1.10.2.** Let $H_j$, $j = 1, 2$, be relations from $\mathfrak{H}$ to $\mathfrak{K}$. Assume that the representations

$$H_j = \left\{\{\mathcal{A}_j e, \mathcal{B}_j e\} : e \in \mathfrak{E}_j\right\}, \quad j = 1, 2,$$

where $\mathcal{A}_j \in \mathbf{B}(\mathfrak{E}_j, \mathfrak{H})$, $\mathcal{B}_j \in \mathbf{B}(\mathfrak{E}_j, \mathfrak{K})$, and $\mathfrak{E}_j$ are Hilbert spaces, are tight. Then the equality $H_1 = H_2$ holds if and only if there exists a bounded bijective operator $X \in \mathbf{B}(\mathfrak{E}_1, \mathfrak{E}_2)$ such that

$$\mathcal{A}\_1 = \mathcal{A}\_2 X, \quad \mathcal{B}\_1 = \mathcal{B}\_2 X.$$

Proof. Since the representations of $H_j$, $j = 1, 2$, are tight, one has

$$\overline{\operatorname{ran}}\begin{pmatrix}\mathcal{A}_1\\\mathcal{B}_1\end{pmatrix}^* = \left(\ker\begin{pmatrix}\mathcal{A}_1\\\mathcal{B}_1\end{pmatrix}\right)^\perp = \mathfrak{E}_1 \quad \text{and} \quad \overline{\operatorname{ran}}\begin{pmatrix}\mathcal{A}_2\\\mathcal{B}_2\end{pmatrix}^* = \left(\ker\begin{pmatrix}\mathcal{A}_2\\\mathcal{B}_2\end{pmatrix}\right)^\perp = \mathfrak{E}_2.$$

Now assume that $H_1 = H_2$, so that

$$\text{ran}\begin{pmatrix} \mathcal{A}\_1\\ \mathcal{B}\_1 \end{pmatrix} = \text{ran}\begin{pmatrix} \mathcal{A}\_2\\ \mathcal{B}\_2 \end{pmatrix}.$$

Then Corollary D.4 and the discussion preceding it show that there exists a boundedly invertible operator $X \in \mathbf{B}(\mathfrak{E}_1, \mathfrak{E}_2)$ such that

$$
\begin{pmatrix} \mathcal{A}\_1 \\ \mathcal{B}\_1 \end{pmatrix} = \begin{pmatrix} \mathcal{A}\_2 \\ \mathcal{B}\_2 \end{pmatrix} X.
$$

The converse is clear. $\square$

The question comes up when relations of the form (1.10.1) are closed. The following result gives a necessary and sufficient condition.

**Proposition 1.10.3.** Let H be a relation from H to K of the form (1.10.1) with A ∈ **B**(E, H) and B ∈ **B**(E, K). Then H is closed if and only if

$$\mathfrak{E}' = \operatorname{ran}\left(\mathcal{A}^*\mathcal{A} + \mathcal{B}^*\mathcal{B}\right)$$

is closed in $\mathfrak{E}$. In this case there exists a tight representation $\{\mathcal{A}', \mathcal{B}'\}$ of $H$, where $\mathcal{A}' \in \mathbf{B}(\mathfrak{E}', \mathfrak{H})$, $\mathcal{B}' \in \mathbf{B}(\mathfrak{E}', \mathfrak{K})$, such that

$$(\mathcal{A}')^\* \mathcal{A}' + (\mathcal{B}')^\* \mathcal{B}' = I\_{\mathfrak{E}'}.$$

Proof. The identity

$$\text{ran}\begin{pmatrix} \mathcal{A} \\ \mathcal{B} \end{pmatrix}^\* \begin{pmatrix} \mathcal{A} \\ \mathcal{B} \end{pmatrix} = \text{ran}\left(\mathcal{A}^\* \mathcal{A} + \mathcal{B}^\* \mathcal{B}\right),$$

together with Lemma D.1 and Lemma D.2 shows that ran (A∗A + B∗B) is closed if and only if

$$H = \text{ran}\begin{pmatrix} \mathcal{A} \\ \mathcal{B} \end{pmatrix}$$

is closed. Now assume that H is closed or, equivalently, that ran (A∗A + B∗B) is closed. Since A∗A + B∗B is self-adjoint the space E has the orthogonal decomposition

$$\mathfrak{E} = \text{ran}\left(\mathcal{A}^\*\mathcal{A} + \mathcal{B}^\*\mathcal{B}\right) \oplus \ker\left(\mathcal{A}^\*\mathcal{A} + \mathcal{B}^\*\mathcal{B}\right).$$

It follows from the identity

$$\ker\left(\mathcal{A}^\*\mathcal{A} + \mathcal{B}^\*\mathcal{B}\right) = \ker\begin{pmatrix} \mathcal{A} \\ \mathcal{B} \end{pmatrix}^\* \begin{pmatrix} \mathcal{A} \\ \mathcal{B} \end{pmatrix} = \ker\begin{pmatrix} \mathcal{A} \\ \mathcal{B} \end{pmatrix},$$

that the restrictions $\mathcal{A}_0$ and $\mathcal{B}_0$ of $\mathcal{A}$ and $\mathcal{B}$ to $\mathfrak{E}' = \operatorname{ran}(\mathcal{A}^*\mathcal{A} + \mathcal{B}^*\mathcal{B})$ form a tight representation of $H$. Moreover, it can be seen that $\mathcal{A}_0^*\mathcal{A}_0 + \mathcal{B}_0^*\mathcal{B}_0$ coincides with the restriction of $\mathcal{A}^*\mathcal{A} + \mathcal{B}^*\mathcal{B}$ to $\mathfrak{E}'$, and hence $\mathcal{A}_0^*\mathcal{A}_0 + \mathcal{B}_0^*\mathcal{B}_0$ is a bounded bijective nonnegative operator in $\mathfrak{E}'$. Now define

$$X = (\mathcal{A}\_0^\* \mathcal{A}\_0 + \mathcal{B}\_0^\* \mathcal{B}\_0)^{-\frac{1}{2}} \in \mathbf{B}(\mathfrak{E}'),$$

and set $\mathcal{A}' = \mathcal{A}_0 X \in \mathbf{B}(\mathfrak{E}', \mathfrak{H})$, $\mathcal{B}' = \mathcal{B}_0 X \in \mathbf{B}(\mathfrak{E}', \mathfrak{K})$. Then it follows that $H$ can be represented in the form

$$H = \left\{ \{ \mathcal{A}'e, \mathcal{B}'e \} : e \in \mathfrak{E}' \right\},$$

where the pair $\{\mathcal{A}', \mathcal{B}'\}$ is normalized by

$$(\mathcal{A}')^\* \mathcal{A}' + (\mathcal{B}')^\* \mathcal{B}' = X^\* \left( \mathcal{A}\_0^\* \mathcal{A}\_0 + \mathcal{B}\_0^\* \mathcal{B}\_0 \right) X = I\_{\mathfrak{E}'}.$$

Since the representing pair $H = \{\mathcal{A}_0, \mathcal{B}_0\}$ is tight, so is the representing pair $H = \{\mathcal{A}', \mathcal{B}'\}$. $\square$
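The normalization step in the proof can be reproduced numerically. The sketch below is an assumed example with matrices in place of the operators $\mathcal{A}_0$, $\mathcal{B}_0$; it computes $X = (\mathcal{A}_0^*\mathcal{A}_0 + \mathcal{B}_0^*\mathcal{B}_0)^{-1/2}$ via an eigendecomposition and checks the resulting identity:

```python
import numpy as np

# Rescaling a tight pair {A0, B0} by X = (A0*A0 + B0*B0)^{-1/2} gives a
# normalized pair: A'^* A' + B'^* B' = I.
A0 = np.array([[1.0, 0.0], [2.0, 1.0]])
B0 = np.array([[0.0, 1.0], [1.0, 1.0]])
G = A0.T @ A0 + B0.T @ B0                 # positive definite in this example
w, V = np.linalg.eigh(G)
X = V @ np.diag(w ** -0.5) @ V.T          # G^{-1/2}
Ap, Bp = A0 @ X, B0 @ X
print(np.allclose(Ap.T @ Ap + Bp.T @ Bp, np.eye(2)))
```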

A direct by-product of Theorem 1.10.1 is the following representation of a relation in terms of the kernel of a bounded row operator.

**Proposition 1.10.4.** A relation H from H to K is of the form

$$H = \left\{ \{f, f'\} \in \mathfrak{H} \times \mathfrak{K} : \mathfrak{M}f = \mathfrak{N}f' \right\},\tag{1.10.4}$$

where $\mathcal{M} \in \mathbf{B}(\mathfrak{H}, \mathfrak{F})$, $\mathcal{N} \in \mathbf{B}(\mathfrak{K}, \mathfrak{F})$, and $\mathfrak{F}$ is a Hilbert space, if and only if $H$ is closed. In this case the Hilbert space $\mathfrak{F}$ can be chosen such that

$$\mathfrak{F} = \overline{\text{span}}\left\{ \overline{\text{ran}}\,\mathcal{M}, \overline{\text{ran}}\,\mathcal{N} \right\}, \tag{1.10.5}$$

where M and N are uniquely determined up to left-multiplication by a bounded bijective operator.

Proof. Note first that for any relation H from H to K the adjoint H<sup>∗</sup> is a closed relation from K to H. Hence, by Theorem 1.10.1, there exist a Hilbert space F and a pair C ∈ **B**(F, K) and D ∈ **B**(F, H), such that

$$H^* = \left\{\{\mathcal{C}e, \mathcal{D}e\} : e \in \mathfrak{F}\right\}.$$

Then it follows from (1.10.3) that

$$H^{**} = \left\{\{f, f'\} \in \mathfrak{H} \times \mathfrak{K} : \mathcal{D}^* f = \mathcal{C}^* f'\right\}.$$

Now assume that H is a closed relation from H to K. Then H = H∗∗ and hence (1.10.4) is valid with M = D<sup>∗</sup> ∈ **B**(H, F) and N = C<sup>∗</sup> ∈ **B**(K, F). For the converse assume that H has the form (1.10.4), where M ∈ **B**(H, F) and N ∈ **B**(K, F). Then it follows directly that H is closed.

If H is given by (1.10.4), then it follows that H<sup>∗</sup> has the representation

$$H^* = \left\{\{\mathcal{N}^* e, \mathcal{M}^* e\} : e \in \mathfrak{F}\right\}$$

with $\mathcal{N}^* \in \mathbf{B}(\mathfrak{F}, \mathfrak{K})$ and $\mathcal{M}^* \in \mathbf{B}(\mathfrak{F}, \mathfrak{H})$. By Proposition 1.10.3, this representation can be assumed to be tight. Then one has

$$\{0\} = \ker \begin{pmatrix} \mathcal{N}^\* \\ \mathcal{M}^\* \end{pmatrix} = \left( \text{ran} \begin{pmatrix} \mathcal{N}^\* \\ \mathcal{M}^\* \end{pmatrix} \right)^\perp = \left( \text{ran} \begin{pmatrix} \mathcal{N} & \mathcal{M} \end{pmatrix} \right)^\perp,$$

which gives (1.10.5).

Likewise, assume that H is given by

$$H = \left\{\{f, f'\} \in \mathfrak{H} \times \mathfrak{K} : \mathcal{M}_1 f = \mathcal{N}_1 f'\right\},$$

where $\mathfrak{F}_1$ is a Hilbert space, $\mathcal{M}_1 \in \mathbf{B}(\mathfrak{H}, \mathfrak{F}_1)$, and $\mathcal{N}_1 \in \mathbf{B}(\mathfrak{K}, \mathfrak{F}_1)$, and that the condition $\mathfrak{F}_1 = \overline{\operatorname{span}}\,\{\overline{\operatorname{ran}}\,\mathcal{M}_1, \overline{\operatorname{ran}}\,\mathcal{N}_1\}$ holds. Then $H^*$ also has the following tight representation:

$$H^\* = \left\{ \left\{ (\mathfrak{N}\_1)^\* e, (\mathfrak{M}\_1)^\* e \right\} : e \in \mathfrak{F}\_1 \right\}.$$

By Lemma 1.10.2, there exists a bounded bijective operator $X \in \mathbf{B}(\mathfrak{F}_1, \mathfrak{F})$ such that

$$(\mathcal{M}_1)^* = \mathcal{M}^* X, \quad (\mathcal{N}_1)^* = \mathcal{N}^* X,$$

or

$$
\mathcal{M}\_1 = X^\* \mathcal{M}, \quad \mathcal{N}\_1 = X^\* \mathcal{N},
$$

where $X^* \in \mathbf{B}(\mathfrak{F}, \mathfrak{F}_1)$ is bijective. This completes the proof. $\square$

Let H be a closed relation from H to K. Then it has a representation as in (1.10.1) and a representation as in (1.10.4). The interest is now in explicitly connecting these representations. The first main result concerns the case when the resolvent set of the relation is nonempty.

**Theorem 1.10.5.** The relation H in H is closed with μ ∈ ρ(H) if and only if H has a representation

$$H = \left\{ \{ \mathcal{A}e, \mathcal{B}e \} : e \in \mathfrak{H} \right\} \tag{1.10.6}$$

with $\mathcal{A}, \mathcal{B} \in \mathbf{B}(\mathfrak{H})$ such that $(\mathcal{B} - \mu\mathcal{A})^{-1} \in \mathbf{B}(\mathfrak{H})$. This representation is automatically tight. Moreover, in this case the pair $\{\mathcal{A}, \mathcal{B}\}$ may be chosen such that $H^*$ has the tight representation

$$H^\* = \left\{ \{ \mathcal{A}^\* e, \mathcal{B}^\* e \} : e \in \mathfrak{H} \right\},\tag{1.10.7}$$

so that H can also be written as

$$H = \left\{ \{f, f'\} \in \mathfrak{H} \times \mathfrak{H} : \mathcal{B}f = \mathcal{A}f' \right\}.\tag{1.10.8}$$

Proof. Let $H$ be the relation in (1.10.6) and assume that $(\mathcal{B} - \mu\mathcal{A})^{-1} \in \mathbf{B}(\mathfrak{H})$. Then it is clear that

$$(H-\mu)^{-1} = \left\{ \{ (\mathcal{B}-\mu\mathcal{A})e, \mathcal{A}e \} : e \in \mathfrak{H} \right\} = \mathcal{A}(\mathcal{B}-\mu\mathcal{A})^{-1},$$

which implies that μ ∈ ρ(H) and that H is closed. The representation is tight, since Ae = 0 and Be = 0 imply (B − μA)e = 0, and hence e = 0.

Conversely, let H be closed and assume that μ ∈ ρ(H). Then, by Lemma 1.2.4, H has the representation

$$H = \left\{ \left\{ (H - \mu)^{-1} f, (I + \mu (H - \mu)^{-1}) f \right\} : f \in \mathfrak{H} \right\}.$$

Hence, one gets (1.10.6) by taking A = (H − μ)<sup>−1</sup> and B = I + μ(H − μ)<sup>−1</sup>, in which case B − μA = I is boundedly invertible. Since μ̄ ∈ ρ(H<sup>∗</sup>), one also has, by Lemma 1.2.4,

$$H^\* = \left\{ \left\{ (H^\* - \overline{\mu})^{-1} g, \left(I + \overline{\mu}(H^\* - \overline{\mu})^{-1}\right) g \right\} : g \in \mathfrak{H} \right\},$$

which then leads to (1.10.7). It also follows that this representation is tight. The assertion (1.10.8) follows from (1.10.7), (1.10.3), and H = H<sup>∗∗</sup>. □

Note that a possible choice for (1.10.6) and (1.10.7) (and hence also (1.10.8)) to hold is given by

$$\mathcal{A} = (H - \mu)^{-1} \quad \text{and} \quad \mathcal{B} = I + \mu (H - \mu)^{-1}, \qquad \mu \in \rho(H). \tag{1.10.9}$$
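In finite dimensions the choice (1.10.9) can be checked directly. The following sketch uses numpy, with a random matrix standing in for the bounded operator H and a generic point μ assumed to lie in ρ(H); both are illustrative choices, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A random matrix stands in for H; mu is assumed to lie in rho(H).
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
mu = 2.0 + 1.5j
I = np.eye(n)

A = np.linalg.inv(H - mu * I)   # A = (H - mu)^{-1}
B = I + mu * A                  # B = I + mu (H - mu)^{-1}

# B - mu*A = I, so (B - mu*A)^{-1} is trivially bounded.
assert np.allclose(B - mu * A, I)

# Each pair {Ae, Be} lies on the graph of H, i.e. H(Ae) = Be.
e = rng.standard_normal(n)
assert np.allclose(H @ (A @ e), B @ e)
```

Since B − μA = I here, the representation (1.10.6) built from this pair is automatically tight, in line with Theorem 1.10.5.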

In the next statement, starting from an arbitrary representing pair {A, B} for H in (1.10.6), a representing pair {X<sup>−∗</sup>A<sup>∗</sup>, X<sup>−∗</sup>B<sup>∗</sup>} for H<sup>∗</sup> as in (1.10.7) is obtained. In fact, Corollary 1.10.6 is an immediate consequence of (1.10.9), Lemma 1.10.2, and Theorem 1.10.5.

**Corollary 1.10.6.** Let H be a closed relation in H with μ ∈ ρ(H), given in the form (1.10.6) with A, B ∈ **B**(H) such that (B − μA)<sup>−1</sup> ∈ **B**(H). Then

$$\mathcal{A} = (H - \mu)^{-1}X, \quad \mathcal{B} = \left(I + \mu(H - \mu)^{-1}\right)X,$$

for some bijective X ∈ **B**(H), and the pair {X<sup>−∗</sup>A<sup>∗</sup>, X<sup>−∗</sup>B<sup>∗</sup>} represents the adjoint H<sup>∗</sup> as in (1.10.7). In particular, H is given by

$$H = \left\{ \{f, f'\} \in \mathfrak{H} \times \mathfrak{H} : \mathcal{B}X^{-1}f = \mathcal{A}X^{-1}f' \right\}.$$

For a given representation H = {A, B} as in (1.10.6) and some bijective operator X ∈ **B**(H), Lemma 1.10.2 shows that also

$$\mathcal{A}' = \mathcal{A}X, \quad \mathcal{B}' = \mathcal{B}X,$$

is a tight representation of H. With {A, B} as in (1.10.9) and X = (μ̄ − μ)I, where μ ∈ ρ(H) ∩ (C \ R), one gets the following representing pair {A′, B′} for H in terms of the Cayley transform in Definition 1.1.13:

$$\begin{aligned} \mathcal{A}' &= (\overline{\mu} - \mu)(H - \mu)^{-1} = I - \mathcal{C}\_{\mu}[H], \\ \mathcal{B}' &= (\overline{\mu} - \mu) \left( I + \mu (H - \mu)^{-1} \right) = \overline{\mu} - \mu \mathcal{C}\_{\mu}[H]. \end{aligned} \tag{1.10.10}$$

The next proposition is also closely related to Theorem 1.10.5.

**Proposition 1.10.7.** Let H be a closed relation in H of the form

$$H = \left\{ \{f, f'\} \in \mathfrak{H} \times \mathfrak{H} : \mathcal{M}f = \mathcal{N}f' \right\},\tag{1.10.11}$$

where F is a Hilbert space and M, N ∈ **B**(H, F), and assume that (1.10.5) is satisfied. Then μ ∈ ρ(H) if and only if M − μN ∈ **B**(H, F) is bijective. In this case H has the parametrization (1.10.6) with

$$\{\mathcal{A}, \mathcal{B}\} = \left\{ (\mathcal{M} - \mu \mathcal{N})^{-1} \mathcal{N}, (\mathcal{M} - \mu \mathcal{N})^{-1} \mathcal{M} \right\}. \tag{1.10.12}$$

Proof. Assume that μ ∈ ρ(H), so that also μ̄ ∈ ρ(H<sup>∗</sup>) and hence one has the parametrization H<sup>∗</sup> = {(H<sup>∗</sup> − μ̄)<sup>−1</sup>, I + μ̄(H<sup>∗</sup> − μ̄)<sup>−1</sup>}. Define the relation K in H by

$$K = \left\{ \left\{ \mathcal{N}^\* e, \mathcal{M}^\* e \right\} : e \in \mathfrak{F} \right\}.$$

Then it follows from (1.10.3) that H = K<sup>∗</sup>. Furthermore, one sees that the representation of K = H<sup>∗</sup> is tight due to (1.10.5). Hence, there exists a bijective operator X ∈ **B**(F, H) such that

$$\mathcal{N}^\* = (H^\* - \overline{\mu})^{-1} X \quad \text{and} \quad \mathcal{M}^\* = \left( I + \overline{\mu} (H^\* - \overline{\mu})^{-1} \right) X,$$

and therefore

$$\mathcal{M} = X^\* \left( I + \mu (H - \mu)^{-1} \right) \quad \text{and} \quad \mathcal{N} = X^\* (H - \mu)^{-1}. \tag{1.10.13}$$

It follows that

$$\mathcal{M} - \mu \mathcal{N} = X^\*,\tag{1.10.14}$$

and hence M − μN ∈ **B**(H, F) is bijective.

Conversely, assume that M − μN ∈ **B**(H, F) is bijective. It follows from (1.10.11) that

$$H - \mu = \left\{ \{f, f' - \mu f\} \in \mathfrak{H} \times \mathfrak{H} : \mathcal{M}f = \mathcal{N}f' \right\}.$$

Then it is clear that ker (H − μ) = {0}. To show that ran (H − μ) = H, let h ∈ H and define

$$f = (\mathcal{M} - \mu \mathcal{N})^{-1} \mathcal{N} h \quad \text{and} \quad f' = \mu f + h.$$

From this definition one sees that

$$\mathcal{N}f' = \mu \mathcal{N}f + \mathcal{N}h = \mu \mathcal{N}f + (\mathcal{M} - \mu \mathcal{N})f = \mathcal{M}f,$$

which shows {f, f′} ∈ H. Furthermore, one sees that f′ − μf = h, which then implies that ran (H − μ) = H. Hence, μ ∈ ρ(H).

It remains to show the parametrization (1.10.12) if μ ∈ ρ(H) or, equivalently, (M − μN)<sup>−1</sup> ∈ **B**(F, H). For this note that H = {(H − μ)<sup>−1</sup>, I + μ(H − μ)<sup>−1</sup>}, and that (1.10.13) and (1.10.14) imply

$$(H - \mu)^{-1} = (\mathcal{M} - \mu \mathcal{N})^{-1} \mathcal{N} \quad \text{and} \quad I + \mu (H - \mu)^{-1} = (\mathcal{M} - \mu \mathcal{N})^{-1} \mathcal{M}.$$

This gives (1.10.12). □
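A finite-dimensional illustration of Proposition 1.10.7: for matrices M and N with N invertible, the relation (1.10.11) is the graph of N⁻¹M, and the pair (1.10.12) reproduces the pair (1.10.9). All matrices and the point μ below are illustrative choices, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
I = np.eye(n)
# Illustrative choices: M, N in B(H, F) with F = H = C^4, N invertible.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
mu = 0.7 - 1.2j  # generic point, so M - mu*N is invertible here

W = np.linalg.inv(M - mu * N)
A, B = W @ N, W @ M                      # the pair {A, B} of (1.10.12)

# With N invertible, the relation (1.10.11) is the graph of H = N^{-1} M.
H = np.linalg.solve(N, M)
assert np.allclose(np.linalg.inv(H - mu * I), A)          # A = (H - mu)^{-1}
assert np.allclose(I + mu * np.linalg.inv(H - mu * I), B)  # B = I + mu (H - mu)^{-1}

e = rng.standard_normal(n)
assert np.allclose(M @ (A @ e), N @ (B @ e))   # {Ae, Be} satisfies M f = N f'
```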

In the next corollary the self-adjoint, maximal dissipative, and maximal accumulative relations are treated.

**Corollary 1.10.8.** Let H be a relation in H. Then the following statements hold:

(i) H is self-adjoint if and only if there exist A, B ∈ **B**(H), such that

$$H = \left\{ \{ \mathcal{A}e, \mathcal{B}e \} : e \in \mathfrak{H} \right\} \tag{1.10.15}$$

holds with

$$\operatorname{Im} \left( \mathcal{A}^\* \mathcal{B} \right) = 0 \quad \text{and} \quad \left( \mathcal{B} - \mu \mathcal{A} \right)^{-1} \in \mathbf{B}(\mathfrak{H})$$

for some, and hence for all, μ ∈ C<sup>+</sup>, and for some, and hence for all, μ ∈ C<sup>−</sup>.

(ii) H is maximal dissipative if and only if there exist A, B ∈ **B**(H), such that (1.10.15) holds with

$$\operatorname{Im} \left( \mathcal{A}^\* \mathcal{B} \right) \ge 0 \quad \text{and} \quad (\mathcal{B} - \mu \mathcal{A})^{-1} \in \mathbf{B}(\mathfrak{H})$$

for some, and hence for all, μ ∈ C<sup>−</sup>.

(iii) H is maximal accumulative if and only if there exist A, B ∈ **B**(H), such that (1.10.15) holds with

$$\operatorname{Im} \left( \mathcal{A}^\* \mathcal{B} \right) \le 0 \quad \text{and} \quad \left( \mathcal{B} - \mu \mathcal{A} \right)^{-1} \in \mathbf{B}(\mathfrak{H})$$

for some, and hence for all, μ ∈ C<sup>+</sup>.

If A, B ∈ **B**(H) are chosen such that also (1.10.7) is satisfied, then H has the representation

$$H = \left\{ \{f, f'\} \in \mathfrak{H} \times \mathfrak{H} : \mathcal{B}f = \mathcal{A}f' \right\}.$$

Proof. It has been shown in Theorem 1.10.5 that a relation H is closed with μ ∈ ρ(H) if and only if it admits the representation (1.10.15) with A, B ∈ **B**(H) such that (B − μA)<sup>−1</sup> ∈ **B**(H). Note that when H is given in this way, then {f, f′} ∈ H if and only if {f, f′} = {Ae, Be} for some e ∈ H. The identity

$$(f',f) = (\mathcal{B}e, \mathcal{A}e) = (\mathcal{A}^\* \mathcal{B}e, e)$$

shows that H is dissipative, accumulative, or symmetric if and only if

$$\operatorname{Im}\left(\mathcal{A}^\*\mathcal{B}\right) \ge 0, \quad \operatorname{Im}\left(\mathcal{A}^\*\mathcal{B}\right) \le 0, \quad \text{or} \quad \operatorname{Im}\left(\mathcal{A}^\*\mathcal{B}\right) = 0, \quad \text{respectively.}$$

Furthermore, it is clear that H is maximal dissipative if μ ∈ C<sup>−</sup>, maximal accumulative if μ ∈ C<sup>+</sup>, and self-adjoint if μ can be chosen both in C<sup>+</sup> and in C<sup>−</sup>. □

The next corollary provides a special representing pair {A, B} for a self-adjoint relation H.

**Corollary 1.10.9.** Let H be a relation in H. Then H is self-adjoint if and only if there exist A, B ∈ **B**(H) such that

$$\mathcal{A}^\* \mathcal{B} = \mathcal{B}^\* \mathcal{A}, \quad \mathcal{A} \mathcal{B}^\* = \mathcal{B} \mathcal{A}^\*, \quad \mathcal{A}^\* \mathcal{A} + \mathcal{B}^\* \mathcal{B} = I = \mathcal{A} \mathcal{A}^\* + \mathcal{B} \mathcal{B}^\*. \tag{1.10.16}$$

Proof. Assume that H is self-adjoint and define

$$\mathcal{A} = \frac{1}{2} \left( I - \mathcal{C}\_{-i}[H] \right) \quad \text{and} \quad \mathcal{B} = \frac{1}{2} \left( iI + i\,\mathcal{C}\_{-i}[H] \right),$$

where C−i[H] denotes the Cayley transform of H (with respect to the point μ = −i) in Definition 1.1.13. A straightforward calculation using the identity (C−i[H])<sup>−1</sup> = Ci[H] = (C−i[H])<sup>∗</sup> (see Lemma 1.3.11) shows that the properties in (1.10.16) are satisfied.

Conversely, it suffices to remark that for μ = ±i

$$(\mathcal{B} + \mu \mathcal{A})^\*(\mathcal{B} + \mu \mathcal{A}) = I = (\mathcal{B} + \mu \mathcal{A})(\mathcal{B} + \mu \mathcal{A})^\*$$

follows from (1.10.16). This shows (B + μA)<sup>−1</sup> ∈ **B**(H) for μ = ±i. Furthermore, the first condition in (1.10.16) shows Im (A<sup>∗</sup>B) = 0, and now Corollary 1.10.8 (i) implies that H is self-adjoint. □
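The construction in the proof can be tested numerically: for a Hermitian matrix H, the Cayley transform C₋ᵢ[H] = I − 2i(H + i)⁻¹ is unitary, and the resulting pair satisfies all of (1.10.16). A sketch with an illustrative random matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (X + X.conj().T) / 2                 # an illustrative self-adjoint matrix
I = np.eye(n)

# Cayley transform C_{-i}[H] = I - 2i (H + i)^{-1}, unitary for self-adjoint H.
C = I - 2j * np.linalg.inv(H + 1j * I)
A = (I - C) / 2
B = (1j * I + 1j * C) / 2

Ah, Bh = A.conj().T, B.conj().T
assert np.allclose(Ah @ B, Bh @ A)       # A*B = B*A, so Im(A*B) = 0
assert np.allclose(A @ Bh, B @ Ah)       # AB* = BA*
assert np.allclose(Ah @ A + Bh @ B, I)   # A*A + B*B = I
assert np.allclose(A @ Ah + B @ Bh, I)   # AA* + BB* = I

e = rng.standard_normal(n)
assert np.allclose(H @ (A @ e), B @ e)   # {Ae, Be} lies on the graph of H
```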

In Theorem 1.10.5 and afterwards special attention was paid to representations of H of the form (1.10.6) and (1.10.8) under the assumption that ρ(H) ≠ ∅. In the next proposition this assumption is dropped.

**Proposition 1.10.10.** Let the relation H = {A, B} from H to K be given by (1.10.1) with A ∈ **B**(E, H), B ∈ **B**(E,K), and assume that

$$
\mathcal{A}^\* \mathcal{A} + \mathcal{B}^\* \mathcal{B} = I. \tag{1.10.17}
$$

Then the adjoint H<sup>∗</sup> from K to H has the parametrization

$$H^\* = \left\{ \left\{ (I - \mathcal{B}\mathcal{B}^\*)\varphi + \mathcal{B}\mathcal{A}^\*\psi, \mathcal{A}\mathcal{B}^\*\varphi + (I - \mathcal{A}\mathcal{A}^\*)\psi \right\} : \varphi \in \mathfrak{K}, \psi \in \mathfrak{H} \right\}.$$

Consequently, H is given by all {f, f′} ∈ H × K for which

$$(I - \mathcal{A}\mathcal{A}^\*)f = \mathcal{A}\mathcal{B}^\*f', \quad \mathcal{B}\mathcal{A}^\*f = (I - \mathcal{B}\mathcal{B}^\*)f'.\tag{1.10.18}$$

Proof. The assumption (1.10.17) and Proposition 1.10.3 imply that the relation H = {A, B} is closed. Let J{h, k} = {k, −h} be the flip-flop operator from H × K to K×H in (1.3.1). Then JH = {B, −A} is a closed relation from K to H. It follows from (1.10.17) that

$$\begin{pmatrix} \mathcal{B} \\ -\mathcal{A} \end{pmatrix}^{\*} \begin{pmatrix} \mathcal{B} \\ -\mathcal{A} \end{pmatrix} = I, \quad \text{and hence} \quad \text{ran} \begin{pmatrix} \mathcal{B} \\ -\mathcal{A} \end{pmatrix}^{\*} = \mathfrak{E}.$$

This implies that the orthogonal projection PJH in K × H onto JH has the form

$$P\_{JH} = \begin{pmatrix} \mathcal{B} \\ -\mathcal{A} \end{pmatrix} \begin{pmatrix} \mathcal{B} \\ -\mathcal{A} \end{pmatrix}^\* = \begin{pmatrix} \mathcal{B}\mathcal{B}^\* & -\mathcal{B}\mathcal{A}^\* \\ -\mathcal{A}\mathcal{B}^\* & \mathcal{A}\mathcal{A}^\* \end{pmatrix}.$$

Since H<sup>∗</sup> = (JH)<sup>⊥</sup> by (1.3.2), the orthogonal projection onto H<sup>∗</sup> is given by

$$P\_{H^{\*}} = I - P\_{JH} = \begin{pmatrix} I - \mathcal{B}\mathcal{B}^{\*} & \mathcal{B}\mathcal{A}^{\*} \\ \mathcal{A}\mathcal{B}^{\*} & I - \mathcal{A}\mathcal{A}^{\*} \end{pmatrix},$$

and this leads to the form of H<sup>∗</sup> in the proposition. It then follows from (1.10.3) that H<sup>∗∗</sup> = H consists of all {f, f′} ∈ H × K for which (1.10.18) holds. □
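A finite-dimensional sketch of Proposition 1.10.10: for a matrix H, the pair A = (I + H<sup>∗</sup>H)<sup>−1/2</sup>, B = HA satisfies the normalization (1.10.17) and represents the graph of H, and the stated parametrization of H<sup>∗</sup> then reproduces the graph of the adjoint matrix. All choices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # illustrative
I = np.eye(n)

# A = (I + H*H)^{-1/2} via eigendecomposition of the positive matrix I + H*H; B = H A.
w, V = np.linalg.eigh(I + H.conj().T @ H)
A = V @ np.diag(w ** -0.5) @ V.conj().T
B = H @ A
assert np.allclose(A.conj().T @ A + B.conj().T @ B, I)   # normalization (1.10.17)

phi = rng.standard_normal(n)
psi = rng.standard_normal(n)
g = (I - B @ B.conj().T) @ phi + B @ A.conj().T @ psi    # first component of the H* pair
gp = A @ B.conj().T @ phi + (I - A @ A.conj().T) @ psi   # second component
assert np.allclose(H.conj().T @ g, gp)   # {g, g'} lies on the graph of the adjoint matrix
```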

## **1.11 Resolvent operators with respect to a bounded operator**

Many of the results in this chapter are phrased for (H − λ)<sup>−1</sup>, where H is a relation in H and λ ∈ C. In the rest of this text there will be several occasions to use similar results phrased for (H − R)<sup>−1</sup>, where R ∈ **B**(H). A brief survey is offered.

For a relation H in H recall that the difference H −R is a well-defined relation in H given by

$$H - R = \left\{ \{h, h' - Rh\} \, : \, \{h, h'\} \in H \right\},$$

and that H − R is closed whenever H is closed. It is clear that

$$\ker\left(H - R\right)^{-1} = \text{mul}\left(H - R\right) = \text{mul}\,H.\tag{1.11.1}$$

The next lemma is a variant of Lemma 1.1.8 in the present context. The proof is not repeated.

**Lemma 1.11.1.** Let H be a relation in H. If R ∈ **B**(H) and ker (H − R) = {0}, then

$$H = \left\{ \left\{ \left( H - R \right)^{-1} f, \left( I + R(H - R)^{-1} \right) f \right\} : f \in \text{ran} \left( H - R \right) \right\}.$$

The next proposition is concerned with the resolvent identity as in Proposition 1.1.7, but in the present context.

**Lemma 1.11.2.** Let H be a relation in H and let R, S ∈ **B**(H). Then

$$(H - R)^{-1} - (H - S)^{-1} = (H - R)^{-1}(R - S)(H - S)^{-1}.\tag{1.11.2}$$

If ker (H − R) = {0} and ker (H − S) = {0}, then (H − R)<sup>−1</sup> and (H − S)<sup>−1</sup> are linear operators with the same kernel mul H.

Proof. For the inclusion (⊂), let {h, h′ − h′′} ∈ (H − R)<sup>−1</sup> − (H − S)<sup>−1</sup> with {h, h′} ∈ (H − R)<sup>−1</sup> and {h, h′′} ∈ (H − S)<sup>−1</sup>. This gives

$$\{h', h + Rh'\} \in H \quad \text{and} \quad \{h'', h + Sh''\} \in H,$$

which shows that {h′ − h′′, Rh′ − Sh′′} ∈ H, and thus

$$\{(R-S)h'',h'-h''\} \in (H-R)^{-1}.$$

Since {h, h′′} ∈ (H − S)<sup>−1</sup> and {h′′, (R − S)h′′} ∈ R − S, one concludes that {h, (R − S)h′′} ∈ (R − S)(H − S)<sup>−1</sup>. Hence, the element {h, h′ − h′′} belongs to the relation (H − R)<sup>−1</sup>(R − S)(H − S)<sup>−1</sup>, which shows the inclusion.

For the inclusion (⊃), let {h, h′} ∈ (H − R)<sup>−1</sup>(R − S)(H − S)<sup>−1</sup>. Then by definition there exists k ∈ H such that

$$\{h, k\} \in (H - S)^{-1} \quad \text{and} \quad \{(R - S)k, h'\} \in (H - R)^{-1},$$

as {k,(R − S)k} ∈ R − S. It is clear from {k, h} ∈ H − S that

$$\{h + (S - R)k, k\} \in (H - R)^{-1}.$$

Thus, it follows that {h, h′ + k} ∈ (H − R)<sup>−1</sup>. Hence, {h, h′} = {h, h′ + k − k} belongs to (H − R)<sup>−1</sup> − (H − S)<sup>−1</sup>, which shows the inclusion.

The last statements follow directly from (1.11.1). □
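When (H − R)⁻¹ and (H − S)⁻¹ are everywhere defined operators, the identity (1.11.2) becomes an identity of matrices in finite dimensions and can be checked directly. A sketch with illustrative random matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
# Illustrative random choices; H - R and H - S are invertible for this seed.
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = rng.standard_normal((n, n))
S = rng.standard_normal((n, n))

HR = np.linalg.inv(H - R)   # (H - R)^{-1}
HS = np.linalg.inv(H - S)   # (H - S)^{-1}

# The resolvent identity (1.11.2) with operator perturbations R and S.
assert np.allclose(HR - HS, HR @ (R - S) @ HS)
```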

Observe that if (H − R)<sup>−1</sup> and (H − S)<sup>−1</sup> belong to **B**(H), then the resolvent identity (1.11.2) involves only operators from **B**(H) defined on all of H. Hence, the following lemma can be verified by direct computation.

**Lemma 1.11.3.** Let H be a closed relation in H, let R, S ∈ **B**(H), and assume that

$$(H - R)^{-1} \quad \text{and} \quad (H - S)^{-1} \in \mathbf{B}(\mathfrak{H}).$$

Then the operator I + (H − R)<sup>−1</sup>(R − S) ∈ **B**(H) is boundedly invertible, with inverse given by

$$\left[I + (H - R)^{-1}(R - S)\right]^{-1} = I - (H - S)^{-1}(R - S). \tag{1.11.3}$$

Likewise, the operator I − (R − S)(H − S)<sup>−1</sup> ∈ **B**(H) is boundedly invertible, with inverse given by

$$\left[I - (R - S)(H - S)^{-1}\right]^{-1} = I + (R - S)(H - R)^{-1}.\tag{1.11.4}$$
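Both inversion formulas can be verified numerically for matrices; the operators below are illustrative random choices:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
I = np.eye(n)
# Illustrative random choices; H - R and H - S are invertible for this seed.
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = rng.standard_normal((n, n))
S = rng.standard_normal((n, n))
HR = np.linalg.inv(H - R)
HS = np.linalg.inv(H - S)

# (1.11.3): [I + (H-R)^{-1}(R-S)]^{-1} = I - (H-S)^{-1}(R-S)
assert np.allclose(np.linalg.inv(I + HR @ (R - S)), I - HS @ (R - S))
# (1.11.4): [I - (R-S)(H-S)^{-1}]^{-1} = I + (R-S)(H-R)^{-1}
assert np.allclose(np.linalg.inv(I - (R - S) @ HS), I + (R - S) @ HR)
```

Multiplying out either product and inserting the resolvent identity (1.11.2) shows why the factors cancel.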

Under the conditions of Lemma 1.11.3 it follows by rewriting the resolvent identity (1.11.2) that

$$(H - R)^{-1} = \left[I + (H - R)^{-1}(R - S)\right](H - S)^{-1} \tag{1.11.5}$$

and

$$(H - R)^{-1} \left[ I - (R - S)(H - S)^{-1} \right] = (H - S)^{-1}; \tag{1.11.6}$$

here the factors are bounded and boundedly invertible by Lemma 1.11.3. Hence, the identities (1.11.3) and (1.11.4) lead to the following useful result, expressing the resolvent difference (H₁ − R)<sup>−1</sup> − (H₂ − R)<sup>−1</sup> in terms of the resolvent difference (H₁ − S)<sup>−1</sup> − (H₂ − S)<sup>−1</sup>.

**Lemma 1.11.4.** Let H₁ and H₂ be closed relations in H, and let R and S be operators in **B**(H). For i = 1, 2 assume that (Hᵢ − R)<sup>−1</sup> and (Hᵢ − S)<sup>−1</sup> belong to **B**(H). Then the bounded operators

$$I - (H\_1 - S)^{-1}(R - S), \quad I - (R - S)(H\_2 - S)^{-1}$$

are boundedly invertible, and

$$\begin{aligned} (H\_1 - R)^{-1} - (H\_2 - R)^{-1} &= \left[ I - (H\_1 - S)^{-1}(R - S) \right]^{-1} \left[ (H\_1 - S)^{-1} - (H\_2 - S)^{-1} \right] \\ &\quad \times \left[ I - (R - S)(H\_2 - S)^{-1} \right]^{-1}. \end{aligned}$$

Proof. It follows from the identities (1.11.5)–(1.11.6) and Lemma 1.11.3 that

$$(H\_1 - R)^{-1} = \left[I - (H\_1 - S)^{-1}(R - S)\right]^{-1} (H\_1 - S)^{-1}$$

and

$$(H\_2 - R)^{-1} = (H\_2 - S)^{-1} \left[ I - (R - S)(H\_2 - S)^{-1} \right]^{-1}.$$

Subtracting these identities yields the desired result. □

The question arises for which relations H in H and operators R ∈ **B**(H) one can conclude that (H − R)<sup>−1</sup> ∈ **B**(H). The following lemma presents some sufficient conditions.

**Lemma 1.11.5.** Let H be a closed relation in H and let R ∈ **B**(H) with Im R ≥ ε for some ε > 0. Then the following statements hold:


(i) If H is maximal accumulative, then (H − R)<sup>−1</sup> ∈ **B**(H) and the operator (H − R)<sup>−1</sup> is dissipative.

Proof. (i) Since the relation H is maximal accumulative, it follows that H is closed, which implies that the relation (H − R)<sup>−1</sup> is closed. In order to show that (H − R)<sup>−1</sup> is a bounded operator, let {f, f′} ∈ (H − R)<sup>−1</sup>. Then {f′, f + Rf′} ∈ H and, since H is accumulative and Im R ≥ ε, this shows that

$$\operatorname{Im}\left(f, f'\right) + \varepsilon \|f'\|^2 \le \operatorname{Im}\left(f, f'\right) + \left( (\operatorname{Im} R)f', f'\right) = \operatorname{Im}\left(f + Rf', f'\right) \le 0,\tag{1.11.7}$$

which leads to

$$\varepsilon \left\| f' \right\|^2 \le -\text{Im}\left( f, f' \right) = \text{Im}\left( f', f \right) \le \left\| f' \right\| \left\| f \right\|.$$

This implies that the closed relation (H − R)<sup>−1</sup> is a bounded operator. Note also that (1.11.7) implies Im (f, f′) ≤ 0 for {f, f′} ∈ (H − R)<sup>−1</sup>. Hence, Im (f′, f) ≥ 0 and (H − R)<sup>−1</sup> is dissipative.

To show that (H − R)<sup>−1</sup> ∈ **B**(H) it therefore suffices to verify that ran (H − R) is dense in H. Note that (ran (H − R))<sup>⊥</sup> = ker (H<sup>∗</sup> − R<sup>∗</sup>) by Proposition 1.3.2 and Proposition 1.3.9. Now observe that ker (H<sup>∗</sup> − R<sup>∗</sup>) = {0}. To see this, assume that f ∈ ker (H<sup>∗</sup> − R<sup>∗</sup>) or, equivalently, {f, R<sup>∗</sup>f} ∈ H<sup>∗</sup>. Since H<sup>∗</sup> is maximal dissipative by Proposition 1.6.7, one obtains that

$$0 \le \operatorname{Im} \left( R^\* f, f \right) = \operatorname{Im} \left( f, Rf \right) = - \left( (\operatorname{Im} R) f, f \right) \le - \varepsilon \| f \|^2,$$

which gives f = 0.

(ii) & (iii) The proofs are similar. □

Let H be a closed relation in H and let A ∈ **B**(E, H), B ∈ **B**(E, H) be a tight representing pair for H, that is,

$$H = \left\{ \{ \mathcal{A}e, \mathcal{B}e \} \, : \, e \in \mathfrak{E} \right\} \tag{1.11.8}$$

and ker A ∩ ker B = {0}; cf. Theorem 1.10.1 and Proposition 1.10.3. Note that if for some μ ∈ C one has (B − μA)<sup>−1</sup> ∈ **B**(H, E), then the tightness condition ker A ∩ ker B = {0} is automatically satisfied.

**Lemma 1.11.6.** Let H be a closed relation in H and assume that H has the tight representation (1.11.8), where A, B ∈ **B**(E, H). Then for any R ∈ **B**(H) one has that

$$(H - R)^{-1} \in \mathbf{B}(\mathfrak{H}) \quad \Leftrightarrow \quad (\mathcal{B} - R\mathcal{A})^{-1} \in \mathbf{B}(\mathfrak{H}, \mathfrak{E}),\tag{1.11.9}$$

in which case

$$(H - R)^{-1} = \mathcal{A}(\mathcal{B} - R\mathcal{A})^{-1}.\tag{1.11.10}$$

Proof. One sees by the definition of H − R that

$$H - R = \left\{ \left\{ \mathcal{A}e, (\mathcal{B} - R\mathcal{A})e \right\} : e \in \mathfrak{E} \right\},\tag{1.11.11}$$

and thus it follows directly that

$$\text{ran}\,(H - R) = \text{ran}\,(\mathcal{B} - R\mathcal{A}) \quad \text{and} \quad \text{ker}\,(H - R) = \mathcal{A}\,\text{ker}\,(\mathcal{B} - R\mathcal{A}).$$

Furthermore, it is clear that in general

$$\ker\left(\mathcal{B} - R\mathcal{A}\right) = \{0\} \quad \Rightarrow \quad \ker\left(H - R\right) = \{0\}.$$

Moreover, under the assumption ker A ∩ ker B = {0} one concludes that

$$\ker\left(H - R\right) = \{0\} \quad \Rightarrow \quad \ker\left(\mathcal{B} - R\mathcal{A}\right) = \{0\}.$$

To see this, let h ∈ ker (B − RA), which means that Bh = RAh. Due to

$$\ker\left(H - R\right) = \mathcal{A}\ker\left(\mathcal{B} - R\mathcal{A}\right),$$

one has Ah ∈ ker (H − R) = {0} and thus also Bh = 0. The tightness condition now implies that h = 0.

In order to prove the equivalence (1.11.9), assume that H has the tight representation (1.11.8). Then it is clear that ker (H − R) = {0} and ran (H − R) = H if and only if ker (B − RA) = {0} and ran (B − RA) = H. This implies (1.11.9). Moreover, (1.11.11) yields (1.11.10). □
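A sketch of Lemma 1.11.6 for matrices: if H is the graph of a matrix T and {A, B} = {X, TX} is a tight representing pair with X bijective, then (1.11.10) reduces to an identity that can be checked directly. The matrices T, X, R below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
# Illustrative choices: H is the graph of T, and {A, B} = {X, T X} with X bijective.
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A, B = X, T @ X
R = rng.standard_normal((n, n))

lhs = np.linalg.inv(T - R)              # (H - R)^{-1} for the graph of T
rhs = A @ np.linalg.inv(B - R @ A)      # A (B - R A)^{-1}, formula (1.11.10)
assert np.allclose(lhs, rhs)
```

Here B − RA = (T − R)X, so its invertibility is equivalent to that of T − R, in line with (1.11.9).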

## **1.12 Nevanlinna families and their representations**

Let H be a Hilbert space and let N : C \ R → **B**(H) be a holomorphic function. Then N is a Nevanlinna function (or **B**(H)-valued Nevanlinna function) if

$$(\operatorname{Im}\lambda)(\operatorname{Im}N(\lambda)) \ge 0, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{1.12.1}$$

and N satisfies the symmetry condition N(λ̄) = N(λ)<sup>∗</sup> for all λ ∈ C \ R; see Definition A.4.1. If, in addition, the imaginary part Im N(λ) is boundedly invertible for some, and hence for all, λ ∈ C \ R, then the Nevanlinna function N is said to be uniformly strict; cf. Definition A.4.7. Observe that by (1.12.1) the operators N(λ) ∈ **B**(H) are dissipative (accumulative) for λ ∈ C<sup>+</sup> (λ ∈ C<sup>−</sup>). In this section the notion of a Nevanlinna function is extended to a so-called Nevanlinna family, that is, a family of relations Z(λ), λ ∈ C \ R, in H which are maximal dissipative or maximal accumulative for λ ∈ C<sup>+</sup> or λ ∈ C<sup>−</sup>, respectively, and satisfy a symmetry condition and a holomorphy condition.

**Definition 1.12.1.** A family of relations Z(λ), λ ∈ C \ R, in H is called a Nevanlinna family if the following conditions are satisfied:

(i) for every λ ∈ C<sup>+</sup> (λ ∈ C<sup>−</sup>) the relation Z(λ) is maximal dissipative (maximal accumulative, respectively);

(ii) Z(λ)<sup>∗</sup> = Z(λ̄) for all λ ∈ C \ R;

(iii) for some μ ∈ C<sup>+</sup> the operator families λ → (Z(λ) + μ)<sup>−1</sup>, λ ∈ C<sup>+</sup>, and λ → (Z(λ) + μ̄)<sup>−1</sup>, λ ∈ C<sup>−</sup>, are holomorphic.

Note that condition (i) in this definition and Theorem 1.6.4 ensure that

$$
\mathbb{C}^- \subset \rho(Z(\lambda)), \ \lambda \in \mathbb{C}^+, \qquad \text{and} \qquad \mathbb{C}^+ \subset \rho(Z(\lambda)), \ \lambda \in \mathbb{C}^-. \tag{1.12.2}
$$

In particular, one has (Z(λ) + μ)<sup>−1</sup> ∈ **B**(H) in (iii). The condition Z(λ)<sup>∗</sup> = Z(λ̄) in (ii) leads to the following conclusions. First of all, it follows from Proposition 1.6.7 that Z(λ) is maximal accumulative for λ ∈ C<sup>−</sup> if and only if Z(λ̄) is maximal dissipative for λ̄ ∈ C<sup>+</sup>. Secondly, λ → (Z(λ) + μ̄)<sup>−1</sup> is holomorphic on C<sup>−</sup> if and only if λ → (Z(λ) + μ)<sup>−1</sup> is holomorphic on C<sup>+</sup>; this follows from the fact that for a **B**(H)-valued function G one has that λ → G(λ) is holomorphic if and only if λ → G(λ̄)<sup>∗</sup> is holomorphic. Furthermore, each element Z(λ) is a closed relation in H and therefore it has a tight operator representation by Theorem 1.10.1 and Proposition 1.10.3. In particular, one has the following general representation result.

**Proposition 1.12.2.** Let Z(λ), λ ∈ C \ R, be a Nevanlinna family in H. Then Z(λ) has the tight representation

$$Z(\lambda) = \left\{ \{A(\lambda)h, B(\lambda)h\} : h \in \mathfrak{H} \right\}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.\tag{1.12.3}$$

Here {A, B} is a pair of **B**(H)-valued functions on C \ R which satisfies:

(a) the functions λ → A(λ) and λ → B(λ) are holomorphic on C \ R;

(b) Im (A(λ)<sup>∗</sup>B(λ)) ≥ 0 for λ ∈ C<sup>+</sup> and Im (A(λ)<sup>∗</sup>B(λ)) ≤ 0 for λ ∈ C<sup>−</sup>;

(c) A(λ)<sup>∗</sup>B(λ̄) = B(λ)<sup>∗</sup>A(λ̄) for all λ ∈ C \ R;

(d) for some μ ∈ C<sup>+</sup> one has (B(λ) + μA(λ))<sup>−1</sup> ∈ **B**(H) for λ ∈ C<sup>+</sup> and (B(λ) + μ̄A(λ))<sup>−1</sup> ∈ **B**(H) for λ ∈ C<sup>−</sup>.

If the pair {C, D} is another tight representation of the Nevanlinna family Z(λ), λ ∈ C \ R, with the above properties, then there exists a bounded and boundedly invertible holomorphic operator family X(λ), λ ∈ C \ R, such that

$$C(\lambda) = A(\lambda)X(\lambda), \quad D(\lambda) = B(\lambda)X(\lambda), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.\tag{1.12.4}$$

Proof. Let Z(λ), λ ∈ C \ R, be a Nevanlinna family, choose μ ∈ C<sup>+</sup> as in Definition 1.12.1, and define A(λ) and B(λ) by

$$A(\lambda) = \begin{cases} (Z(\lambda) + \mu)^{-1}, & \lambda \in \mathbb{C}^+, \\ (Z(\lambda) + \overline{\mu})^{-1}, & \lambda \in \mathbb{C}^-, \end{cases} \tag{1.12.5}$$

and

$$B(\lambda) = \begin{cases} I - \mu (Z(\lambda) + \mu)^{-1}, & \lambda \in \mathbb{C}^+, \\ I - \overline{\mu} (Z(\lambda) + \overline{\mu})^{-1}, & \lambda \in \mathbb{C}^-. \end{cases} \tag{1.12.6}$$

Then it follows from (1.12.2) that A(λ) and B(λ) belong to **B**(H), and Lemma 1.2.4 shows that Z(λ) has the representation (1.12.3). Definition 1.12.1 (iii) implies that the mappings λ → A(λ) and λ → B(λ) are holomorphic, which shows (a). Furthermore, it follows from (1.12.5) and (1.12.6) that B(λ) + μA(λ) = I, λ ∈ C<sup>+</sup>, and B(λ) + μ̄A(λ) = I, λ ∈ C<sup>−</sup>, which shows (d). Since by (i) Z(λ) is dissipative (accumulative) for λ ∈ C<sup>+</sup> (λ ∈ C<sup>−</sup>), it follows from

$$\operatorname{Im}\left(A(\lambda)^{\*}B(\lambda)h,h\right) = \operatorname{Im}\left(B(\lambda)h,A(\lambda)h\right), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \quad h \in \mathfrak{H},\tag{1.12.7}$$

that (b) holds. It is a direct consequence of (1.12.3) that

$$Z(\overline{\lambda})^\* = \left\{ \{k, k'\} \in \mathfrak{H} \times \mathfrak{H} \, : \, B(\overline{\lambda})^\* k = A(\overline{\lambda})^\* k' \right\};$$

cf. (1.10.2) and (1.10.3). Since by (ii) Z(λ) = Z(λ̄)<sup>∗</sup>, it follows from (1.12.3) that (c) holds.

The representation in (1.12.3) with A(λ) and B(λ) in (1.12.5)–(1.12.6) is tight for each λ ∈ C \ R, that is,

$$\ker A(\lambda) \cap \ker B(\lambda) = \{0\}, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

In order to see this, assume that A(λ)g = 0 and B(λ)g = 0 for some λ ∈ C<sup>+</sup> and some g ∈ H. Then (B(λ) + μA(λ))g = 0 and hence (d) implies that g = 0. Likewise, the same conclusion holds when λ ∈ C<sup>−</sup>.

Now assume that {C, D} is another tight representation of the same Nevanlinna family Z(λ), λ ∈ C \ R, i.e., assume that Z(λ) is also given by

$$Z(\lambda) = \left\{ \{ C(\lambda)h, D(\lambda)h \} : h \in \mathfrak{H} \right\}.$$

It follows from Lemma 1.10.2 that there exist bounded bijective operators X(λ), λ ∈ C \ R, such that (1.12.4) holds. In particular, for λ ∈ C<sup>+</sup> one has

$$D(\lambda) + \mu C(\lambda) = \left(B(\lambda) + \mu A(\lambda)\right)X(\lambda)$$

and hence the function

$$\lambda \mapsto X(\lambda) = \left( B(\lambda) + \mu A(\lambda) \right)^{-1} (D(\lambda) + \mu C(\lambda)), \quad \lambda \in \mathbb{C}^+,$$

is holomorphic on C<sup>+</sup>. Similarly, on C<sup>−</sup> the function X has the form

$$
\lambda \mapsto X(\lambda) = \left( B(\lambda) + \overline{\mu} A(\lambda) \right)^{-1} (D(\lambda) + \overline{\mu} C(\lambda)), \quad \lambda \in \mathbb{C}^-,
$$

and is holomorphic. □

**Definition 1.12.3.** Let {A, B} be a pair of **B**(H)-valued functions on C \ R. Then {A, B} is called a Nevanlinna pair if it satisfies the properties (a), (b), (c), and (d) in Proposition 1.12.2.

Hence, by Proposition 1.12.2 each Nevanlinna family is represented by a Nevanlinna pair. The converse is also true: each Nevanlinna pair defines a Nevanlinna family.

**Proposition 1.12.4.** Let {A, B} be a Nevanlinna pair in H. Then Z(λ), λ ∈ C \ R, defined by (1.12.3) is a Nevanlinna family in H.

Proof. Let {A, B} be a Nevanlinna pair and define the family Z(λ), λ ∈ C \ R, by (1.12.3). Then (b) implies that Z(λ) is dissipative (accumulative) for λ ∈ C<sup>+</sup> (λ ∈ C<sup>−</sup>); cf. (1.12.7). It follows from (d) and the definition of Z(λ) that one has

$$\begin{aligned} \left(Z(\lambda) + \mu\right)^{-1} &= A(\lambda) \left(B(\lambda) + \mu A(\lambda)\right)^{-1} \in \mathbf{B}(\mathfrak{H}), \quad \lambda \in \mathbb{C}^+,\\ \left(Z(\lambda) + \overline{\mu}\right)^{-1} &= A(\lambda) \left(B(\lambda) + \overline{\mu} A(\lambda)\right)^{-1} \in \mathbf{B}(\mathfrak{H}), \quad \lambda \in \mathbb{C}^-, \end{aligned} \tag{1.12.8}$$

for some μ ∈ C<sup>+</sup>. With (a) this shows that the holomorphy condition (iii) is satisfied. Moreover, from (1.12.8) and Theorem 1.6.4 it is now clear that Z(λ) is maximal dissipative (maximal accumulative) for λ ∈ C<sup>+</sup> (λ ∈ C<sup>−</sup>), which is (i). From (c) one concludes

$$A(\lambda)^\* \left( B(\overline{\lambda}) + \overline{\mu} A(\overline{\lambda}) \right) = \left( B(\lambda)^\* + \overline{\mu} A(\lambda)^\* \right) A(\overline{\lambda}), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

For λ ∈ C⁺ and μ ∈ C⁺ this reads

$$\left(B(\lambda)^{\*} + \overline{\mu}A(\lambda)^{\*}\right)^{-1}A(\lambda)^{\*} = A(\overline{\lambda})\left(B(\overline{\lambda}) + \overline{\mu}A(\overline{\lambda})\right)^{-1},$$

so that

$$\begin{aligned} \left(Z(\lambda) + \mu\right)^{-\*} &= \left(B(\lambda)^{\*} + \overline{\mu}A(\lambda)^{\*}\right)^{-1}A(\lambda)^{\*} \\ &= A(\overline{\lambda})\left(B(\overline{\lambda}) + \overline{\mu}A(\overline{\lambda})\right)^{-1} = \left(Z(\overline{\lambda}) + \overline{\mu}\right)^{-1}. \end{aligned}$$

However, the left-hand side is equal to (Z(λ)∗ + μ̄)⁻¹, and hence it follows that Z(λ)∗ = Z(λ̄) for λ ∈ C⁺. A similar reasoning is valid for λ ∈ C⁻. Hence, (ii) follows and therefore Z(λ), λ ∈ C \ R, defined by (1.12.3) is a Nevanlinna family in H. □

In the next lemma it turns out that the conditions (iii) in the definition of a Nevanlinna family and the conditions (d) in the definition of a Nevanlinna pair hold for all μ ∈ C \ R.

**Lemma 1.12.5.** Let Z(λ), λ ∈ C \ R, be a Nevanlinna family in H and let {A, B} be a Nevanlinna pair in H. Then the following statements hold:

(i) for every μ ∈ C⁺ the mapping λ ↦ (Z(λ) + μ)⁻¹ is holomorphic on C⁺ with values in **B**(H), and for every μ ∈ C⁻ the mapping λ ↦ (Z(λ) + μ)⁻¹ is holomorphic on C⁻ with values in **B**(H);

(ii) (B(λ) + μA(λ))⁻¹ ∈ **B**(H) for all λ, μ ∈ C⁺ and for all λ, μ ∈ C⁻.

Proof. (i) Assume that Z(λ) satisfies (i) in Definition 1.12.1, so that (1.12.2) holds. Fix λ ∈ C \ R and let ν, μ be in the same half-plane as λ. Then one has −ν ∈ ρ(Z(λ)) and −μ ∈ ρ(Z(λ)). According to the resolvent formula one has

$$\begin{split} \left( Z(\lambda) + \mu \right)^{-1} - \left( Z(\lambda) + \nu \right)^{-1} &= (\nu - \mu) (Z(\lambda) + \nu)^{-1} (Z(\lambda) + \mu)^{-1} \\ &= (\nu - \mu) (Z(\lambda) + \mu)^{-1} (Z(\lambda) + \nu)^{-1} .\end{split} \tag{1.12.9}$$

Assume that for some μ ∈ C⁺ the mapping λ ↦ (Z(λ) + μ)⁻¹ is holomorphic on C⁺. Let ν ∈ C⁺. Then it follows from the resolvent formula (1.12.9) that

$$(Z(\lambda) + \mu)^{-1} = \left(I + (\nu - \mu)(Z(\lambda) + \mu)^{-1}\right)(Z(\lambda) + \nu)^{-1}.$$

The factor I + (ν − μ)(Z(λ) + μ)⁻¹ is holomorphic in λ ∈ C⁺ and according to Lemma 1.6.10 it is boundedly invertible. Hence, its inverse is also holomorphic in λ ∈ C⁺ and one obtains for λ ∈ C⁺

$$\left( Z(\lambda) + \nu \right)^{-1} = \left( I + (\nu - \mu) \left( Z(\lambda) + \mu \right)^{-1} \right)^{-1} \left( Z(\lambda) + \mu \right)^{-1}.$$

Hence, λ ↦ (Z(λ) + ν)⁻¹ is holomorphic on C⁺. The corresponding statement for the half-plane C⁻ follows from the symmetry of the Nevanlinna family.
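Since the computations in this step are purely algebraic, they can also be checked in a finite-dimensional model. The following numpy sketch (our illustration, not part of the text) verifies the resolvent identity (1.12.9) and the factorization used above for a randomly generated dissipative matrix Z:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
I = np.eye(n)

# Build a dissipative matrix Z = H + iP with H Hermitian and P positive
# definite, so that Im (Zv, v) = (Pv, v) > 0 for every v != 0.
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = (X + X.conj().T) + 1j * (X @ X.conj().T + I)

mu, nu = 0.5 + 1.0j, -2.0 + 0.3j     # two points in the upper half-plane

Rmu = np.linalg.inv(Z + mu * I)      # (Z + mu)^{-1}
Rnu = np.linalg.inv(Z + nu * I)      # (Z + nu)^{-1}

# Resolvent identity (1.12.9), in both orders of the commuting factors
assert np.allclose(Rmu - Rnu, (nu - mu) * Rnu @ Rmu)
assert np.allclose(Rmu - Rnu, (nu - mu) * Rmu @ Rnu)

# Factorization from the proof:
#   (Z + nu)^{-1} = (I + (nu - mu)(Z + mu)^{-1})^{-1} (Z + mu)^{-1}
assert np.allclose(Rnu, np.linalg.inv(I + (nu - mu) * Rmu) @ Rmu)
```

Here Z + μ is invertible for every μ ∈ C⁺ because Im ((Z + μ)v, v) ≥ Im μ‖v‖² for a dissipative Z, which is the finite-dimensional shadow of maximal dissipativity.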

(ii) Assume that for some μ ∈ C⁺ one has (B(λ) + μA(λ))⁻¹ ∈ **B**(H). Define Z(λ), λ ∈ C⁺, by (1.12.3). Then Z(λ), λ ∈ C⁺, is maximal dissipative and hence (Z(λ) + μ)⁻¹ ∈ **B**(H) for all μ ∈ C⁺. Now Lemma 1.11.6 yields (B(λ) + μA(λ))⁻¹ ∈ **B**(H) for all μ ∈ C⁺. A similar reasoning holds for μ ∈ C⁻. □

Now the conditions (iii) in Definition 1.12.1 and, likewise, the conditions (d) in Definition 1.12.3 will be further relaxed in a useful way.

**Proposition 1.12.6.** Let Z(λ), λ ∈ C \ R, be a Nevanlinna family and let {A, B} be a Nevanlinna pair in H such that the representation (1.12.3) holds. Let N be a uniformly strict Nevanlinna function with values in **B**(H). Then the conditions in (iii) in Definition 1.12.1 may be replaced by

λ ↦ (Z(λ) + N(λ))⁻¹ is holomorphic on C \ R with values in **B**(H).

Moreover, the conditions in (d) in Definition 1.12.3 may be replaced by

$$\left(B(\lambda) + N(\lambda)A(\lambda)\right)^{-1} \in \mathbf{B}(\mathfrak{H}), \quad \lambda \in \mathbb{C} \ \backslash \mathbb{R}.$$

In particular, the choice N(λ) = λ is allowed for these statements. Moreover,

$$-\left(Z(\lambda) + N(\lambda)\right)^{-1} = -A(\lambda)\left(B(\lambda) + N(\lambda)A(\lambda)\right)^{-1}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{1.12.10}$$

defines a Nevanlinna function with values in **B**(H).
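To illustrate (a scalar sketch of ours, not from the text): for the Nevanlinna family Z(λ) = λ and the admissible choice N(λ) = λ, the function (1.12.10) becomes

$$-\left(Z(\lambda) + N(\lambda)\right)^{-1} = -\frac{1}{2\lambda}, \qquad \operatorname{Im}\left(-\frac{1}{2\lambda}\right) = \frac{\operatorname{Im}\lambda}{2|\lambda|^2},$$

which is holomorphic on C \ R, has nonnegative imaginary part on C⁺, and satisfies the symmetry F(λ)∗ = F(λ̄): a (scalar) Nevanlinna function, as claimed.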

Proof. The proof will be given in three steps. In Step 1 it is shown how condition (iii) in Definition 1.12.1 and condition (d) in Definition 1.12.3 give rise to the stated conditions in the proposition. In Step 2 and Step 3 the reverse direction is traversed for Nevanlinna families and Nevanlinna pairs, respectively.

Step 1. Let Z(λ) be a Nevanlinna family in H and let {A, B} be a Nevanlinna pair in H as in Definition 1.12.1 and Definition 1.12.3, respectively, such that (1.12.3) holds. Then for λ ∈ C \ R

$$\left(Z(\lambda) + N(\lambda)\right)^{-1} \in \mathbf{B}(\mathfrak{H}) \quad \text{and} \quad \left(B(\lambda) + N(\lambda)A(\lambda)\right)^{-1} \in \mathbf{B}(\mathfrak{H}),\tag{1.12.11}$$

and the identity (1.12.10) holds and defines a Nevanlinna function with values in **B**(H). In fact, for λ ∈ C⁺ the first assertion in (1.12.11) follows from Lemma 1.11.5 (i) with H = −Z(λ) and R = N(λ). Similarly, for λ ∈ C⁻ Lemma 1.11.5 (ii) yields the first assertion in (1.12.11). The second assertion in (1.12.11) and the identity in (1.12.10) follow from Lemma 1.11.6. Since the functions A, B, and N are holomorphic, the identity in (1.12.10) defines a holomorphic function, and Lemma 1.11.5 implies

$$(\operatorname{Im}\lambda)\operatorname{Im}\left(-(Z(\lambda)+N(\lambda))^{-1}h,h\right)\ge 0, \qquad \lambda\in\mathbb{C}\text{ }\backslash\mathbb{R},\ h\in\mathfrak{H}.$$

Furthermore, for λ ∈ C \ R one has (−(Z(λ) + N(λ))⁻¹)∗ = −(Z(λ̄) + N(λ̄))⁻¹ by Definition 1.12.1 (ii) and the fact that N is a Nevanlinna function. Now it follows that the function in (1.12.10) is a Nevanlinna function with values in **B**(H).

Step 2. Let Z(λ), λ ∈ C \ R, satisfy (i) and (ii) of Definition 1.12.1, and assume that

λ ↦ (Z(λ) + N(λ))⁻¹ is holomorphic with values in **B**(H).

Define A(λ) and B(λ) for λ ∈ C \ R by

$$A(\lambda) = \left(Z(\lambda) + N(\lambda)\right)^{-1} \quad \text{and} \quad B(\lambda) = I - N(\lambda)\left(Z(\lambda) + N(\lambda)\right)^{-1};$$

then it follows from Lemma 1.2.4 that the family Z(λ), λ ∈ C \ R, has the representation (1.12.3). Note that by assumption A(λ) and B(λ) belong to **B**(H) and that each of the mappings λ ↦ A(λ) and λ ↦ B(λ) is holomorphic. Since Z(λ) is maximal dissipative (maximal accumulative) for λ ∈ C⁺ (λ ∈ C⁻), it follows from Lemma 1.11.6 that for μ ∈ C⁺ the operator (B(λ) + μA(λ))⁻¹ belongs to **B**(H) when λ ∈ C⁺, the operator (B(λ) + μ̄A(λ))⁻¹ belongs to **B**(H) when λ ∈ C⁻, and

$$\begin{aligned} (Z(\lambda) + \mu)^{-1} &= A(\lambda) \left( B(\lambda) + \mu A(\lambda) \right)^{-1}, \quad \lambda \in \mathbb{C}^+, \\ (Z(\lambda) + \overline{\mu})^{-1} &= A(\lambda) \left( B(\lambda) + \overline{\mu} A(\lambda) \right)^{-1}, \quad \lambda \in \mathbb{C}^-. \end{aligned}$$

Since λ ↦ A(λ) and λ ↦ B(λ) are holomorphic, it follows that the mapping λ ↦ (Z(λ) + μ)⁻¹ is holomorphic on C⁺ with values in **B**(H) and the mapping λ ↦ (Z(λ) + μ̄)⁻¹ is holomorphic on C⁻ with values in **B**(H). Hence, (iii) in Definition 1.12.1 is satisfied.

Step 3. Let {A, B} satisfy (a), (b), and (c) of Definition 1.12.3, and assume that

$$\left(B(\lambda) + N(\lambda)A(\lambda)\right)^{-1} \in \mathbf{B}(\mathfrak{H})\,.$$

Define the family Z(λ), λ ∈ C \ R, by (1.12.3). It will be shown first that Z(λ), λ ∈ C \ R, is a Nevanlinna family. In fact, it follows from the definition that

$$\left(Z(\lambda) + N(\lambda)\right)^{-1} = A(\lambda) \left(B(\lambda) + N(\lambda)A(\lambda)\right)^{-1} \in \mathbf{B}(\mathfrak{H}),$$

and via (a) one sees that

$$
\lambda \mapsto \left( Z(\lambda) + N(\lambda) \right)^{-1} \text{ is holomorphic with values in } \mathbf{B}(\mathfrak{H}). \tag{1.12.12}
$$

Note that (b) shows that Z(λ) is dissipative (accumulative) for λ ∈ C⁺ (λ ∈ C⁻). In fact, Z(λ) is maximal dissipative (maximal accumulative) for λ ∈ C⁺ (λ ∈ C⁻). To see this, let Z′(λ) be an extension of Z(λ) which is dissipative for λ ∈ C⁺. Then clearly

$$\left(Z(\lambda) + N(\lambda)\right)^{-1} \subset \left(Z'(\lambda) + N(\lambda)\right)^{-1},\tag{1.12.13}$$

the left-hand side is an operator in **B**(H), and the right-hand side defines an operator. In fact, if {0, k} ∈ (Z′(λ) + N(λ))⁻¹, then {k, −N(λ)k} ∈ Z′(λ), and as Z′(λ) is dissipative, it follows that Im (−N(λ)k, k) ≥ 0. On the other hand, one has Im (N(λ)k, k) ≥ 0, as N is a Nevanlinna function. Hence, Im (N(λ)k, k) = 0, and since N is uniformly strict, k = 0 and (Z′(λ) + N(λ))⁻¹ is an operator. It follows that the inclusion in (1.12.13) is an equality and therefore Z′(λ) = Z(λ) for λ ∈ C⁺. Thus, Z(λ) is maximal dissipative for λ ∈ C⁺. A similar argument shows that Z(λ) is maximal accumulative for λ ∈ C⁻. Hence, (i) in Definition 1.12.1 has been shown. It clearly follows from (c) that Z(λ) ⊂ Z(λ̄)∗, which implies that

$$\left(Z(\lambda) + N(\lambda)\right)^{-1} \subset \left(Z(\overline{\lambda})^\* + N(\lambda)\right)^{-1} = \left(Z(\overline{\lambda}) + N(\overline{\lambda})\right)^{-\*},$$

where in the last step it was used that N(λ) = N(λ̄)∗. The above inclusion is in fact an equality, since the operators on the left and on the right belong to **B**(H). Therefore, Z(λ) = Z(λ̄)∗, and hence (ii) in Definition 1.12.1 holds. Now it follows from (1.12.12) and Step 2 that also (iii) in Definition 1.12.1 holds. Therefore, one concludes that Z(λ), λ ∈ C \ R, defined by (1.12.3) is a Nevanlinna family.

Now it follows from Proposition 1.12.2 that {A, B} is a Nevanlinna pair and thus, in particular, condition (d) holds. □

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 2 Boundary Triplets and Weyl Functions**

The basic properties of boundary triplets for closed symmetric operators or relations in Hilbert spaces are presented. These triplets give rise to a parametrization of the intermediate extensions of symmetric relations, in particular of the self-adjoint extensions. Closely related is the Kreĭn formula, which describes the resolvent operators of such intermediate extensions. The introduction of boundary triplets and a discussion of corresponding boundary value problems can be found in Section 2.1 and Section 2.2. Associated with a boundary triplet are the γ-field and the Weyl function, and these analytic objects are treated in Section 2.3. The existence and construction of boundary triplets is discussed in Section 2.4; their transformations are the contents of Section 2.5. Section 2.6 on Kreĭn's resolvent formula for canonical extensions and a description of their spectra is central in this chapter. Furthermore, a discussion of self-adjoint exit space extensions, Štraus families, and the Kreĭn–Naĭmark formula can be found in Section 2.7. Some related perturbation problems are treated in Section 2.8.

## **2.1 Boundary triplets**

The following definition introduces a boundary triplet, one of the key objects in this text. It is based on the well-known Green or Lagrange formula together with an additional maximality condition.

**Definition 2.1.1.** Let S be a closed symmetric relation in a Hilbert space H. Then {G, Γ0, Γ1} is a boundary triplet for S∗ if G is a Hilbert space and Γ0, Γ1 : S∗ → G are linear mappings such that the mapping Γ : S∗ → G × G defined by

$$
\Gamma \widehat{f} = \{\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{f}\}, \quad \widehat{f} = \{f, f'\} \in S^\*,
$$

is surjective and the identity

$$(f',g)\_{\mathfrak{H}} - (f,g')\_{\mathfrak{H}} = (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g})\_{\mathfrak{G}} - (\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g})\_{\mathfrak{G}} \tag{2.1.1}$$

holds for all f̂ = {f, f′}, ĝ = {g, g′} ∈ S∗.
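For a first concrete impression (anticipating the ordinary differential operators of Chapter 6; the particular boundary mappings below are an illustrative assumption, not a definition from the text), consider S∗f = −f″ on dom S∗ = H²(0,1), with G = C², Γ0f = {f(0), f(1)}, and Γ1f = {f′(0), −f′(1)}. The following sympy sketch verifies the abstract Green identity (2.1.1) for two sample functions:

```python
import sympy as sp

x = sp.symbols('x', real=True)
# two real-valued sample functions in H^2(0,1); conjugation is omitted
f, g = x**2 + 1, x**3 - x
fp, gp = sp.diff(f, x), sp.diff(g, x)

def ip(u, v):
    # L^2(0,1) inner product for real-valued functions
    return sp.integrate(u * v, (x, 0, 1))

# left-hand side of (2.1.1) for S* = -d^2/dx^2
lhs = ip(-sp.diff(f, x, 2), g) - ip(f, -sp.diff(g, x, 2))

# boundary mappings of the (assumed) triplet:
#   Gamma_0 u = {u(0), u(1)},   Gamma_1 u = {u'(0), -u'(1)}
G0f, G1f = (f.subs(x, 0), f.subs(x, 1)), (fp.subs(x, 0), -fp.subs(x, 1))
G0g, G1g = (g.subs(x, 0), g.subs(x, 1)), (gp.subs(x, 0), -gp.subs(x, 1))

dot = lambda a, b: a[0] * b[0] + a[1] * b[1]   # inner product on G = C^2

# right-hand side of (2.1.1)
rhs = dot(G1f, G0g) - dot(G0f, G1g)
assert sp.simplify(lhs - rhs) == 0
```

The two signs in Γ1 compensate the opposite orientations of the boundary terms at 0 and 1 produced by integration by parts.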


Note that a symmetric relation S is densely defined if and only if S∗ is an operator. In this case the boundary mappings Γ0 and Γ1 can be defined on dom S∗ instead of (the graph of) S∗. More precisely, if {G, Γ0, Γ1} is a boundary triplet for S∗, then one defines boundary mappings Γ0 and Γ1 on dom S∗ by the following identifications:

$$
\Gamma\_0 f = \Gamma\_0 \widehat{f}, \quad \Gamma\_1 f = \Gamma\_1 \widehat{f}, \quad \widehat{f} = \{f, f'\} \in S^\*.
$$

In the following treatment, whenever S is a densely defined operator, boundary mappings defined on S∗ and on dom S∗ will be identified in this sense. After this identification, (2.1.1) turns into

$$(S^\*f,g)\_{\mathfrak{H}} - (f,S^\*g)\_{\mathfrak{H}} = (\Gamma\_1f,\Gamma\_0g)\_{\mathfrak{G}} - (\Gamma\_0f,\Gamma\_1g)\_{\mathfrak{G}},\tag{2.1.2}$$

where f,g ∈ dom S∗. This formalism will be used in Chapter 6 and Chapter 8 in the treatment of ordinary and partial differential operators.

The identity (2.1.1) or the identity (2.1.2) is sometimes called the abstract Green identity or the abstract Lagrange identity; in this text mostly the terminology abstract Green identity will be used. This identity has a geometric interpretation which is best expressed in terms of the indefinite inner products

$$\begin{aligned} \left[\cdot,\cdot\right]\_{\mathfrak{H}^{2}} &:= \left(\mathcal{J}\_{\mathfrak{H}} \cdot, \cdot \right)\_{\mathfrak{H}^{2}}, \qquad \mathcal{J}\_{\mathfrak{H}} = \begin{pmatrix} 0 & -iI\_{\mathfrak{H}} \\ iI\_{\mathfrak{H}} & 0 \end{pmatrix}, \\ \left[\cdot,\cdot\right]\_{\mathfrak{G}^{2}} &:= \left(\mathcal{J}\_{\mathfrak{G}} \cdot, \cdot \right)\_{\mathfrak{G}^{2}}, \qquad \mathcal{J}\_{\mathfrak{G}} = \begin{pmatrix} 0 & -iI\_{\mathfrak{G}} \\ iI\_{\mathfrak{G}} & 0 \end{pmatrix}, \end{aligned} \tag{2.1.3}$$

where J_H = (J_H)∗ = (J_H)⁻¹ ∈ **B**(H²) and J_G = (J_G)∗ = (J_G)⁻¹ ∈ **B**(G²); cf. Section 1.8. By means of these inner products, the identity (2.1.1) can be rewritten as

$$\left[\widehat{f}, \widehat{g}\right]\_{\mathfrak{H}^2} = \left[\Gamma \widehat{f}, \Gamma \widehat{g}\right]\_{\mathfrak{G}^2} \tag{2.1.4}$$

for f̂ = {f, f′}, ĝ = {g, g′} ∈ S∗. Later the scalar products in (2.1.1), (2.1.2), and (2.1.4) will be used without the indices H and G, respectively, when there is no danger of confusion. Recall that the adjoint A∗ of a relation A in H can be written as an orthogonal complement with respect to the inner product [[·, ·]], that is, A∗ = A^[[⊥]]; cf. Section 1.8.
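In a finite-dimensional model H = Cⁿ the relation between (2.1.1) and (2.1.4) can be checked numerically. The following numpy sketch (our illustration) evaluates [·,·]_{H²} from (2.1.3) and confirms that it reproduces the Green-type form up to a common factor −i, together with the identity Im (f′, f) = ½[f̂, f̂]_{H²} quoted later as (1.8.4):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def ip(x, y):
    # inner product (x, y): linear in the first, conjugate-linear in the
    # second argument (the convention used in the text)
    return np.vdot(y, x)

def rand_vec():
    return rng.standard_normal(n) + 1j * rng.standard_normal(n)

f, fp = rand_vec(), rand_vec()   # an element {f, f'} of H^2
g, gp = rand_vec(), rand_vec()

def indef(p, q):
    # [p, q]_{H^2} = (J_H p, q)_{H^2} with J_H = [[0, -iI], [iI, 0]]
    Jp = (-1j * p[1], 1j * p[0])
    return ip(Jp[0], q[0]) + ip(Jp[1], q[1])

green = ip(fp, g) - ip(f, gp)    # left-hand side of (2.1.1)

# [f^, g^]_{H^2} equals the Green form times -i (the same factor appears
# on the G^2 side, so (2.1.1) and (2.1.4) are equivalent) ...
assert np.isclose(indef((f, fp), (g, gp)), -1j * green)
# ... and [f^, f^]_{H^2} = 2 Im (f', f), i.e., (1.8.4)
assert np.isclose(indef((f, fp), (f, fp)), 2 * ip(fp, f).imag)
```

The same two-line computation with J_G in place of J_H handles the boundary side of (2.1.4).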

Some elementary but important properties of the boundary mappings are collected in the following proposition.

**Proposition 2.1.2.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} is a boundary triplet for S∗. Then the following statements hold:

(i) the mappings Γ : S∗ → G × G, Γ0 : S∗ → G, and Γ1 : S∗ → G are surjective and continuous;

(ii) S = ker Γ.

Proof. (i) The continuity of Γ : S∗ → G × G is essentially a consequence of the fact that Γ is isometric in the sense of (2.1.4). More precisely, by definition the mapping Γ is surjective, and since dom Γ = S∗ is closed, it follows from Lemma 1.8.1 that Γ is continuous. Clearly, the mappings Γ0 and Γ1 are also surjective and continuous.

(ii) In order to show that ker Γ ⊂ S, let f̂ ∈ ker Γ. Then it follows from (2.1.4) that [[f̂, ĝ]] = [[Γf̂, Γĝ]] = 0 for all ĝ ∈ S∗, which implies f̂ ∈ (S∗)^[[⊥]] = S∗∗ = S, since S is closed. Hence, ker Γ ⊂ S has been shown. To show that S ⊂ ker Γ, let f̂ ∈ S. Since Γ is surjective, for arbitrary fixed ϕ ∈ G × G one can choose ĝ ∈ S∗ = S^[[⊥]] such that Γĝ = J_G ϕ, with J_G as in (2.1.3). Since f̂ ∈ S and ĝ ∈ S∗ it follows from (2.1.4) that

$$(\Gamma\widehat{f}, \varphi)\_{\mathfrak{G}^2} = \left(\Gamma\widehat{f}, \mathcal{J}\_{\mathfrak{G}}^{-1}\Gamma\widehat{g}\right)\_{\mathfrak{G}^2} = \left[\Gamma\widehat{f}, \Gamma\widehat{g}\right]\_{\mathfrak{G}^2} = \left[\widehat{f}, \widehat{g}\right]\_{\mathfrak{H}^2} = 0$$

for all ϕ ∈ G², which leads to Γf̂ = 0. Thus, S ⊂ ker Γ has been shown. □

By means of a boundary triplet {G, Γ0, Γ1} for S∗ the intermediate extensions of S defined in Section 1.7 can be described via relations in the space G. In particular, the one-to-one correspondence in the next theorem preserves adjoints, which is a consequence of the abstract Green identity (2.1.4).

**Theorem 2.1.3.** Let S be a closed symmetric relation in H and let {G, Γ0, Γ1} be a boundary triplet for S∗. Then the following statements hold:

(i) there is a bijective correspondence between the set of intermediate extensions A_Θ of S and the set of relations Θ in G, via

$$A\_{\Theta} := \{ \widehat{f} \in S^\* \, : \, \Gamma \widehat{f} \in \Theta \}; \tag{2.1.5}$$

(ii) the closure of A_Θ corresponds to the closure of Θ via (2.1.5); in particular, A_Θ is closed if and only if Θ is closed;

(iii) A_Θ = ker (Γ1 − ΘΓ0);

(iv) (A_Θ)∗ = A_Θ∗;

(v) A_Θ ⊂ A_Θ′ if and only if Θ ⊂ Θ′;

(vi) A_Θ is an operator if and only if S is an operator and

$$\Theta \cap \Gamma(\{0\} \times \text{mul} \, S^\*) = \{0, 0\}. \tag{2.1.6}$$

Proof. (i) & (ii) The relation S∗ ⊂ H² is equipped with the Hilbert space inner product of H². Now let M ⊂ H² be the orthogonal complement of S in S∗, so that S ⊕ M = S∗. Since ker Γ = S, the restriction Γ′ of Γ to M is an isomorphism between M and G × G. Hence Γ′ gives a one-to-one correspondence between the subspaces H′ of M and the subspaces Θ of G × G via

$$
\Theta = \Gamma' H' \quad \text{or, equivalently,} \quad (\Gamma')^{-1} \Theta = H'. \tag{2.1.7}
$$

Clearly, this gives rise to a one-to-one correspondence between all intermediate extensions H of S and all subspaces H′ of M via H = S ⊕ H′, which is expressed in (2.1.5). Moreover, since Γ′ is an isomorphism, it also follows from (2.1.7) that the closure of H′ corresponds to the closure of Θ. This implies via (2.1.5) that the closure of A_Θ corresponds to the closure of Θ.

(iii) Let A_Θ be defined by (2.1.5). It will be verified that

$$A\_{\Theta} = \ker\left(\Gamma\_1 - \Theta \Gamma\_0\right) \tag{2.1.8}$$

holds for any relation Θ in G. Note that (2.1.8) is clear in the special case that Θ is an operator, since Γf̂ = {Γ0f̂, Γ1f̂} ∈ Θ means that ΘΓ0f̂ = Γ1f̂. Now assume that Θ is a relation.

First the inclusion (⊂) in (2.1.8) will be shown. For this consider f̂ ∈ A_Θ. Hence, f̂ ∈ S∗ and {Γ0f̂, Γ1f̂} ∈ Θ. Then {f̂, Γ0f̂} ∈ Γ0 gives {f̂, Γ1f̂} ∈ ΘΓ0. Since {f̂, Γ1f̂} ∈ Γ1 one finds {f̂, 0} ∈ Γ1 − ΘΓ0. In other words, f̂ ∈ ker (Γ1 − ΘΓ0).

For the inclusion (⊃) in (2.1.8) consider f̂ ∈ ker (Γ1 − ΘΓ0). Then one has {f̂, 0} ∈ Γ1 − ΘΓ0 and hence there exists an element ψ such that {f̂, ψ} ∈ Γ1 and {f̂, ψ} ∈ ΘΓ0. Thus {f̂, ϕ} ∈ Γ0 and {ϕ, ψ} ∈ Θ for some ϕ. Since both Γ0 and Γ1 are operators, one has ψ = Γ1f̂ and ϕ = Γ0f̂, and therefore {Γ0f̂, Γ1f̂} ∈ Θ, that is, f̂ ∈ A_Θ.

(iv) To show that (A_Θ)∗ ⊂ A_Θ∗, let ĝ ∈ (A_Θ)∗. Let ϕ̂ ∈ Θ and choose f̂ ∈ A_Θ such that Γf̂ = ϕ̂. Then one has

$$\left[\widehat{\varphi}, \Gamma \widehat{g}\right]\_{\mathfrak{G}^2} = \left[\Gamma \widehat{f}, \Gamma \widehat{g}\right]\_{\mathfrak{G}^2} = \left[\widehat{f}, \widehat{g}\right]\_{\mathfrak{H}^2} = 0,$$

which implies Γĝ ∈ Θ∗, that is, ĝ ∈ A_Θ∗. One concludes (A_Θ)∗ ⊂ A_Θ∗. Since A_Θ∗ is closed by (ii), the inclusion A_Θ∗ ⊂ (A_Θ)∗ follows together with (ii) from

$$A\_{\Theta^\*} = (A\_{\Theta^\*})^{\*\*} \subset (A\_{\Theta^{\*\*}})^\* = (A\_{\overline{\Theta}})^\* = (\overline{A\_{\Theta}})^\* = (A\_{\Theta})^\*.$$

(v) This assertion is obvious from the correspondence in (2.1.5).

(vi) Let A_Θ be an operator. Then clearly S is an operator. Assume that Γf̂ ∈ Θ for some element f̂ = {0, f′} ∈ S∗. Then f̂ ∈ A_Θ and hence f′ = 0, so that (2.1.6) holds.

Conversely, assume now that S is an operator and that (2.1.6) holds. If f̂ = {0, f′} ∈ A_Θ, then f′ ∈ mul S∗ and Γ{0, f′} ∈ Θ. Hence, Γ{0, f′} = {0, 0} and, as S = ker Γ, it follows that {0, f′} ∈ S. This implies f′ = 0. Therefore, A_Θ is an operator. □
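To make the kernel representation (2.1.8) concrete, consider the standard second-order illustration treated in Chapter 6 (the specific boundary mappings here are our assumption for this sketch): S∗f = −f″ on dom S∗ = H²(0,1), G = C², Γ0f = {f(0), f(1)}, Γ1f = {f′(0), −f′(1)}. For a diagonal operator parameter one then obtains Robin boundary conditions:

$$\Theta = \begin{pmatrix} \alpha & 0 \\ 0 & \beta \end{pmatrix}, \qquad A\_{\Theta} = \ker\left(\Gamma\_1 - \Theta\Gamma\_0\right) = \left\{ f \in H^2(0,1) : f'(0) = \alpha f(0), \; -f'(1) = \beta f(1) \right\}.$$

For real α and β the parameter Θ is self-adjoint in C², so A_Θ is then a self-adjoint extension of S; cf. Corollary 2.1.4 below.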

Due to the abstract Green identity (2.1.1), (2.1.2), or (2.1.4) some properties of intermediate extensions are preserved in the corresponding relations in G.

**Corollary 2.1.4.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let A_Θ be the extension of S in H corresponding to the relation Θ in G via (2.1.5). Then the following statements hold:

(i) A_Θ is dissipative (accumulative) if and only if Θ is dissipative (accumulative);

(ii) A_Θ is maximal dissipative (maximal accumulative) if and only if Θ is maximal dissipative (maximal accumulative);

(iii) A_Θ is symmetric if and only if Θ is symmetric;

(iv) A_Θ is self-adjoint if and only if Θ is self-adjoint;

(v) A_Θ is essentially self-adjoint if and only if Θ is essentially self-adjoint.

Proof. (i) This assertion follows immediately from the identity (see (1.8.4))

$$\operatorname{Im}\left(f',f\right) = \frac{1}{2}\left[\widehat{f},\widehat{f}\right]\_{\mathfrak{H}^2} = \frac{1}{2}\left[\Gamma\widehat{f},\Gamma\widehat{f}\right]\_{\mathfrak{G}^2} = \operatorname{Im}\left(\Gamma\_1\widehat{f},\Gamma\_0\widehat{f}\right),$$

where f̂ = {f, f′} ∈ S∗, and (2.1.5).

(ii) According to Theorem 2.1.3 (v), for any two extensions A_Θ and A_Θ′ of S one has A_Θ ⊂ A_Θ′ if and only if Θ ⊂ Θ′. Therefore, if A_Θ is maximal dissipative, then Θ is dissipative because of (i), and if Θ′ is a dissipative extension of Θ in G, then A_Θ′ is a dissipative extension of A_Θ, so that A_Θ = A_Θ′. Hence, Θ = Θ′ and Θ is maximal dissipative. The converse direction is proved in exactly the same way. The statement for maximal accumulative extensions follows analogously.

(iii)–(v) These assertions follow from the previous items and the fact that a relation is symmetric (self-adjoint) if and only if it is (maximal) dissipative and (maximal) accumulative. □

Let H and H′ be two closed intermediate extensions of S in H. Recall that H and H′ are disjoint if H ∩ H′ = S, and that H and H′ are transversal if they are disjoint and H +̂ H′ = S∗; cf. Definition 1.7.6. If, in addition, the extensions H and H′ are self-adjoint, then disjointness implies

$$S^\* = \text{clos}\,(H \mathbin{\hat{+}} H');$$

and in this case H and H′ are transversal if and only if H +̂ H′ is closed; cf. Lemma 1.7.7. In a similar way the closed relations Θ and Θ′ in G, as intermediate extensions of the trivial symmetric relation {0, 0}, are disjoint if Θ ∩ Θ′ = {0, 0} and transversal if they are disjoint and Θ +̂ Θ′ = G².

**Lemma 2.1.5.** Let S be a closed symmetric relation in H and let {G, Γ0, Γ1} be a boundary triplet for S∗. Let Θ and Θ′ be relations in G. Then

$$A\_{\Theta} \cap A\_{\Theta'} = A\_{\Theta \cap \Theta'} \tag{2.1.9}$$

and

$$A\_{\Theta} \mathbin{\hat{+}} A\_{\Theta'} = A\_{\Theta \hat{+} \Theta'}. \tag{2.1.10}$$

In particular, if A_Θ and A_Θ′ are closed or, equivalently, Θ and Θ′ are closed, then the following statements hold:

(i) A_Θ and A_Θ′ are disjoint if and only if Θ and Θ′ are disjoint;

(ii) A_Θ and A_Θ′ are transversal if and only if Θ and Θ′ are transversal.

Proof. The identity (2.1.9) follows from

$$\begin{aligned} A\_{\Theta} \cap A\_{\Theta'} &= \left\{ \widehat{f} \in S^\* \,:\, \Gamma \widehat{f} \in \Theta \right\} \cap \left\{ \widehat{f} \in S^\* \,:\, \Gamma \widehat{f} \in \Theta' \right\} \\ &= \left\{ \widehat{f} \in S^\* \,:\, \Gamma \widehat{f} \in \Theta \cap \Theta' \right\} \\ &= A\_{\Theta \cap \Theta'}, \end{aligned}$$

while the identity (2.1.10) follows from

$$
\Gamma(A\_{\Theta} \mathbin{\hat{+}} A\_{\Theta'}) = \Gamma(A\_{\Theta}) \mathbin{\hat{+}} \Gamma(A\_{\Theta'}) = \Theta \mathbin{\hat{+}} \Theta'.
$$

In particular, (2.1.9) together with S = ker Γ shows that A_Θ ∩ A_Θ′ = S if and only if Θ ∩ Θ′ = {0, 0}, while (2.1.9) and (2.1.10) show that A_Θ ∩ A_Θ′ = S and A_Θ +̂ A_Θ′ = S∗ if and only if Θ ∩ Θ′ = {0, 0} and Θ +̂ Θ′ = G². This completes the proof. □

**Corollary 2.1.6.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and assume that dim G < ∞. If A_Θ and A_Θ′ are self-adjoint extensions of S which are disjoint, then they are transversal.

Let S be a closed symmetric relation in H and let {G, Γ0, Γ1} be a boundary triplet for S∗. There are two special extensions of S which will be frequently used in the following; they are defined by

$$A\_0 := \ker \Gamma\_0 \quad \text{and} \quad A\_1 := \ker \Gamma\_1. \tag{2.1.11}$$

It is clear that A0 and A1 are self-adjoint extensions of S, since they correspond to the self-adjoint parameters Θ in G in (2.1.5) given by

$$
\Theta = \{0\} \times \mathcal{G} \quad \text{and} \quad \Theta = \mathcal{G} \times \{0\}, \tag{2.1.12}
$$

respectively. Furthermore, the representations in (2.1.12) show that the self-adjoint extensions A0 and A1 are transversal; cf. Lemma 2.1.5. Note also in this context that a boundary triplet {G, Γ0, Γ1} for S∗ can only exist if the defect numbers of the closed symmetric relation S coincide (since it admits the self-adjoint extensions A0 and A1 in (2.1.11)); a more detailed discussion of the existence and uniqueness of boundary triplets will be provided in Section 2.4 and Section 2.5.
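In the illustrative second-order setting (our sketch, assuming S∗f = −f″ on H²(0,1) with Γ0f = {f(0), f(1)} and Γ1f = {f′(0), −f′(1)}), the two distinguished extensions (2.1.11) are the familiar transversal pair of Dirichlet and Neumann realizations:

$$A\_0 = \ker \Gamma\_0: \; f(0) = f(1) = 0 \quad \text{(Dirichlet)}, \qquad A\_1 = \ker \Gamma\_1: \; f'(0) = f'(1) = 0 \quad \text{(Neumann)}.$$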

As S = ker Γ, it follows from von Neumann's decomposition (Theorem 1.7.11) that Γ is an isomorphism from N̂_λ(S∗) +̂ N̂_λ̄(S∗), λ ∈ C \ R, onto G². Due to the definitions in (2.1.11) a similar observation can be made for the components Γ0 and Γ1.

**Lemma 2.1.7.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let A0 = ker Γ0 and A1 = ker Γ1. Then the adjoint S∗ admits the direct sum decompositions

$$\begin{aligned} S^\* &= A\_0 \mathbin{\hat{+}} \widehat{\mathfrak{N}}\_{\lambda}(S^\*), \quad \lambda \in \rho(A\_0), \\ S^\* &= A\_1 \mathbin{\hat{+}} \widehat{\mathfrak{N}}\_{\lambda}(S^\*), \quad \lambda \in \rho(A\_1). \end{aligned} \tag{2.1.13}$$

In particular, the restrictions Γ0 ↾ N̂_λ(S∗) and Γ1 ↾ N̂_λ(S∗) are isomorphisms from N̂_λ(S∗) onto G for λ ∈ ρ(A0) and λ ∈ ρ(A1), respectively.

Proof. As A0 and A1 are self-adjoint, the direct sum decompositions (2.1.13) hold by Corollary 1.7.5. Since Γ0 and Γ1 map S∗ onto G, and A0 and A1 are their respective kernels, it is clear that the restrictions Γ0 ↾ N̂_λ(S∗) and Γ1 ↾ N̂_λ(S∗) are isomorphisms from N̂_λ(S∗) onto G. □

In the rest of the text the self-adjoint extension A0 = ker Γ0 will often serve as a point of reference due to the corresponding representation {0} × G in the parameter space G. In the next proposition it is shown that A0 and a given closed extension A_Θ are disjoint (transversal) if and only if the parameter Θ is a (bounded, everywhere defined) operator.

**Proposition 2.1.8.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, and let A_Θ be the closed intermediate extension of S in H corresponding to the closed relation Θ in G via (2.1.5). Then the following statements hold:

(i) A_Θ and A0 are disjoint if and only if Θ is an operator;

(ii) A_Θ and A0 are transversal if and only if Θ ∈ **B**(G).

Proof. Apply Lemma 2.1.5 to the self-adjoint extension A0 = ker Γ0, which corresponds to {0} × G.

(i) A<sup>Θ</sup> and A<sup>0</sup> are disjoint if and only if Θ ∩ ({0} × G) = {0, 0}, which is the same as saying that mul Θ = {0}.

(ii) A<sup>Θ</sup> and A<sup>0</sup> are transversal if and only if

$$
\Theta \cap (\{0\} \times \mathcal{G}) = \{0, 0\} \quad \text{and} \quad \Theta \stackrel{\rightarrow}{+} (\{0\} \times \mathcal{G}) = \mathcal{G} \times \mathcal{G},
$$

which is the same as saying that mul Θ = {0} and dom Θ = G. By the closed graph theorem, the last two conditions are equivalent to Θ <sup>∈</sup> **<sup>B</sup>**(G). -

Let S be a closed symmetric relation in H with equal defect numbers and let A be a self-adjoint extension of S. Later it will be shown that there exists a boundary triplet {G, Γ0, Γ1} for S∗ such that ker Γ0 coincides with A; cf. Theorem 2.4.1. Furthermore, it will be shown that for a pair of self-adjoint extensions of S which are transversal there exists a boundary triplet {G, Γ0, Γ1} for S∗ such that ker Γ0 and ker Γ1 coincide with this pair; cf. Theorem 2.5.9. The notion of boundary triplet is not unique; in fact, a parametrization of all possible boundary triplets will be provided in Section 2.5.

The following theorem is of a different nature. It can be used to prove that a given relation T is the adjoint of a symmetric relation S.

**Theorem 2.1.9.** Let T be a relation in H, let G be a Hilbert space, and assume that

$$
\Gamma = \begin{pmatrix} \Gamma\_0 \\ \Gamma\_1 \end{pmatrix} : T \to \mathcal{G} \times \mathcal{G}
$$

is a linear mapping such that the following conditions are satisfied:

(i) there exists a self-adjoint relation A0 in H such that A0 ⊂ ker Γ0;

(ii) ran Γ = G × G;

(iii) for all f̂ = {f, f′}, ĝ = {g, g′} ∈ T the abstract Green identity holds:

$$(f',g)\_{\mathfrak{H}} - (f,g')\_{\mathfrak{H}} = (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g})\_{\mathfrak{G}} - (\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g})\_{\mathfrak{G}}.$$

Then S := ker Γ is a closed symmetric relation in H such that S∗ = T and {G, Γ0, Γ1} is a boundary triplet for S∗ with A0 = ker Γ0.

Proof. First note that condition (iii) implies that ker Γ0 is symmetric. To see this, let f̂, ĝ ∈ ker Γ0. Then, by condition (iii),

$$\left[\widehat{f}, \widehat{g}\right] = \left[\Gamma \widehat{f}, \Gamma \widehat{g}\right] = 0,$$

and hence ker Γ0 is a symmetric relation in H. Then (i) gives A0 = A0∗ ⊂ ker Γ0, which implies

$$\ker \Gamma\_0 \subset (\ker \Gamma\_0)^\* \subset A\_0^\* = A\_0 \subset \ker \Gamma\_0.$$

Therefore, ker Γ0 = A0 is self-adjoint in H. Moreover, S := ker Γ ⊂ ker Γ0 is a symmetric relation in H.

It will be shown that

$$S = T^\*,\tag{2.1.14}$$

so that, in particular, S is closed. To see (⊂) in (2.1.14), let f̂ ∈ S = ker Γ. For any ĝ ∈ T one has [[f̂, ĝ]] = [[Γf̂, Γĝ]] = 0, so that f̂ ∈ T∗. To see (⊃) in (2.1.14), let f̂ ∈ T∗. Since A0 is self-adjoint and A0 ⊂ T, it follows that T∗ ⊂ A0∗ = A0 = ker Γ0, so that Γ0f̂ = 0. For arbitrary ĝ ∈ T it therefore follows that

$$0 = \left[\widehat{f}, \widehat{g}\right] = \left[\Gamma \widehat{f}, \Gamma \widehat{g}\right] = -i(\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}).$$

From condition (ii) one concludes ran Γ0 = G and this leads to Γ1f̂ = 0. Hence, f̂ ∈ ker Γ0 ∩ ker Γ1 = S. Therefore, (2.1.14) is proved.

It follows from S = T∗ that S∗ = T∗∗ is the closure of T. Hence, it remains to show that T is closed. Let (f̂n) be a sequence in T converging to f̂. It suffices to show that f̂ ∈ T. Let ψ̂ ∈ G × G and let ĝ ∈ T be such that ψ̂ = JG−1Γĝ (here condition (ii) is being used). Using the continuity of the indefinite inner product [[·, ·]] (see Section 1.8) one obtains

$$\left(\Gamma\widehat{f}\_n,\widehat{\psi}\right) = \left(\Gamma\widehat{f}\_n,\mathcal{J}\_\mathcal{G}^{-1}\Gamma\widehat{g}\right) = \left[\Gamma\widehat{f}\_n,\Gamma\widehat{g}\right] = \left[\widehat{f}\_n,\widehat{g}\right] \to \left[\widehat{f},\widehat{g}\right].\tag{2.1.15}$$

This shows that (Γf̂n) is a weak Cauchy sequence in G × G, hence weakly bounded and thus bounded. It follows that there exists a subsequence, again denoted by (Γf̂n), which converges weakly to some φ̂ ∈ G × G. Now let ĥ ∈ T be such that Γĥ = φ̂ (again condition (ii) is being used). Choose ĝ ∈ T and let, as above, ψ̂ = JG−1Γĝ, so that (2.1.15) remains valid. Then (2.1.15) implies

$$\left[\widehat{f},\widehat{g}\right] = \lim\_{n\to\infty} (\Gamma\widehat{f}\_n,\widehat{\psi}) = (\widehat{\varphi},\widehat{\psi}) = \left(\Gamma\widehat{h},\mathcal{J}\_{\mathcal{G}}^{-1}\Gamma\widehat{g}\right) = \left[\Gamma\widehat{h},\Gamma\widehat{g}\right] = \left[\widehat{h},\widehat{g}\right],$$

and therefore [[f̂ − ĥ, ĝ]] = 0. Since ĝ ∈ T was arbitrary, one concludes that f̂ − ĥ ∈ T∗ = S ⊂ T. Now ĥ ∈ T implies that f̂ ∈ T. Therefore, T is closed and it follows that S∗ = T.

By conditions (ii) and (iii), {G, Γ0, Γ1} is a boundary triplet for S∗. Above it was also shown that A0 = ker Γ0. □

## **2.2 Boundary value problems**

Let S be a closed symmetric relation in H and let {G, Γ0, Γ1} be a boundary triplet for S∗. Due to Theorem 2.1.3 one may think of the intermediate extensions of S as being parametrized by the relations in the space G; for this reason the space G will often be called the boundary space or parameter space associated with the boundary triplet. Let Θ be a closed relation in G and let AΘ be the corresponding closed extension of S in H via (2.1.5):

$$A\_{\Theta} = \left\{ \widehat{f} \in S^\* \, : \, \Gamma \widehat{f} \in \Theta \right\} = \ker \left( \Gamma\_1 - \Theta \Gamma\_0 \right). \tag{2.2.1}$$

Recall from Section 1.10 that any closed relation Θ in G has a parametric representation of the form Θ = {A, B}, i.e.,

$$\Theta = \left\{ \{ \mathcal{A}e, \mathcal{B}e \} : e \in \mathcal{E} \right\} \tag{2.2.2}$$

with some operators A, B ∈ **B**(E, G) and a Hilbert space E. Likewise, since Θ<sup>∗</sup> is closed, it has a representation of the form

$$\Theta^\* = \left\{ \{ \mathcal{C}e', \mathcal{D}e' \} : e' \in \mathcal{E}' \right\} \tag{2.2.3}$$

with some operators C, D ∈ **B**(E′, G) and a Hilbert space E′. Thus, (2.2.3) gives

$$\Theta = \{ \{ \varphi, \varphi' \} \in \mathcal{G} \times \mathcal{G} : \mathcal{D}^\* \varphi = \mathcal{C}^\* \varphi' \}. \tag{2.2.4}$$

Therefore, it follows that AΘ in (2.2.1) can be written as

$$A\_{\Theta} = \{ \widehat{f} \in S^\* \, : \, \mathcal{D}^\* \Gamma\_0 \widehat{f} = \mathcal{C}^\* \Gamma\_1 \widehat{f} \}. \tag{2.2.5}$$

In the following it will be shown how the pair {C, D} in (2.2.3) and (2.2.5) can be expressed in terms of the original pair {A, B} in (2.2.2). The main result is contained in the next proposition.

Recall that the condition that Θ = {A, B} is closed with some A, B ∈ **B**(E, G) is equivalent to the condition that ran (A∗A + B∗B) is closed in E; cf. Proposition 1.10.3. In fact, in the case where Θ is closed one may assume that the representing pair {A, B} satisfies the normalization condition A∗A + B∗B = I; cf. Proposition 1.10.3.

**Proposition 2.2.1.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let AΘ be the closed extension of S in H corresponding to the closed relation Θ in G via (2.1.5). Assume that Θ has the representation Θ = {A, B} with A, B ∈ **B**(E, G) such that

$$
\mathcal{A}^\* \mathcal{A} + \mathcal{B}^\* \mathcal{B} = I. \tag{2.2.6}
$$

Then the intermediate extension AΘ in (2.2.1) can be described as

$$A\_{\Theta} = \left\{ \widehat{f} \in S^\* : \begin{pmatrix} \mathcal{B}\mathcal{A}^\* \\ I - \mathcal{A}\mathcal{A}^\* \end{pmatrix} \Gamma\_0 \widehat{f} = \begin{pmatrix} I - \mathcal{B}\mathcal{B}^\* \\ \mathcal{A}\mathcal{B}^\* \end{pmatrix} \Gamma\_1 \widehat{f} \right\}. \tag{2.2.7}$$

Proof. By Proposition 1.10.10, condition (2.2.6) implies that the relation Θ is given by

$$\Theta = \left\{ \{\varphi, \varphi'\} \in \mathfrak{G}^2 : \begin{pmatrix} \mathcal{B}\mathcal{A}^\* \\ I - \mathcal{A}\mathcal{A}^\* \end{pmatrix} \varphi = \begin{pmatrix} I - \mathcal{B}\mathcal{B}^\* \\ \mathcal{A}\mathcal{B}^\* \end{pmatrix} \varphi' \right\}.$$

Then (2.2.7) follows from (2.1.5). □
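As an illustration, consider the special case where Θ ∈ **B**(G) is a self-adjoint operator. The normalized pair used below is a standard choice satisfying (2.2.6); the computation is only a sketch, not part of the original text.

```latex
% Sketch: Proposition 2.2.1 for a self-adjoint operator \Theta \in \mathbf{B}(\mathcal{G}),
% with the (assumed) normalized pair
%   \mathcal{A} = (I+\Theta^2)^{-1/2}, \qquad \mathcal{B} = \Theta(I+\Theta^2)^{-1/2},
% so that \mathcal{A}^*\mathcal{A} + \mathcal{B}^*\mathcal{B}
%       = (I+\Theta^2)^{-1} + \Theta^2(I+\Theta^2)^{-1} = I, i.e. (2.2.6) holds.
\[
  \mathcal{B}\mathcal{A}^* = \Theta(I+\Theta^2)^{-1}, \qquad
  I - \mathcal{A}\mathcal{A}^* = \Theta^2(I+\Theta^2)^{-1}, \qquad
  I - \mathcal{B}\mathcal{B}^* = (I+\Theta^2)^{-1}, \qquad
  \mathcal{A}\mathcal{B}^* = \Theta(I+\Theta^2)^{-1}.
\]
% The first row of (2.2.7) reads \Theta(I+\Theta^2)^{-1}\Gamma_0\widehat f
%   = (I+\Theta^2)^{-1}\Gamma_1\widehat f,
% i.e. \Gamma_1\widehat f = \Theta\Gamma_0\widehat f, and the second row is then
% automatic; hence (2.2.7) reduces to A_\Theta = \ker(\Gamma_1 - \Theta\Gamma_0),
% in agreement with (2.2.1).
```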

In the next proposition it will be assumed, in addition, that ρ(Θ) ≠ ∅. The following result is a reformulation of Theorem 1.10.5 and formula (2.1.5).

**Proposition 2.2.2.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let AΘ be the closed extension of S in H corresponding to the closed relation Θ in G via (2.1.5). Then μ ∈ ρ(Θ) if and only if Θ has the representation Θ = {A, B} with A, B ∈ **B**(G) such that (B − μA)−1 ∈ **B**(G). Moreover, the pair {A, B} may be chosen such that Θ∗ = {A∗, B∗}. In this case the intermediate extension AΘ in (2.2.1) can be described as

$$A\_{\Theta} = \left\{ \widehat{f} \in S^\* \, : \, \mathcal{B}\Gamma\_0 \widehat{f} = \mathcal{A}\Gamma\_1 \widehat{f} \right\}. \tag{2.2.8}$$

For μ ∈ ρ(Θ) it follows from (1.10.9) that in Proposition 2.2.2 one can choose

$$\mathcal{A} = (\Theta - \mu)^{-1} \quad \text{and} \quad \mathcal{B} = I + \mu(\Theta - \mu)^{-1}. \tag{2.2.9}$$


In the case that μ ∈ C \ R one may also choose

$$\mathcal{A} = I - C\_{\mu}[\Theta] \quad \text{and} \quad \mathcal{B} = \overline{\mu} - \mu\, C\_{\mu}[\Theta] \tag{2.2.10}$$

by (1.10.10), where C<sup>μ</sup> denotes the Cayley transform.
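For the choice (2.2.9) one can check directly that the pair {A, B} represents Θ and that B − μA is boundedly invertible; the following short verification is a sketch, not part of the original text.

```latex
% Verification sketch for (2.2.9): \mathcal{A} = (\Theta-\mu)^{-1} and
% \mathcal{B} = I + \mu(\Theta-\mu)^{-1}, where \mu \in \rho(\Theta).
% For \{\varphi,\varphi'\} \in \Theta put e = \varphi' - \mu\varphi; then
\[
  \mathcal{A}e = (\Theta-\mu)^{-1}(\varphi'-\mu\varphi) = \varphi,
  \qquad
  \mathcal{B}e = e + \mu\varphi = \varphi',
\]
% so that \Theta = \{\{\mathcal{A}e,\mathcal{B}e\} : e \in \mathcal{G}\}. Moreover,
\[
  \mathcal{B} - \mu\mathcal{A}
  = I + \mu(\Theta-\mu)^{-1} - \mu(\Theta-\mu)^{-1}
  = I \in \mathbf{B}(\mathcal{G}),
\]
% which is the invertibility condition in Proposition 2.2.2 (with inverse I).
```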

The next corollary is a translation of Corollary 2.1.4 and Corollary 1.10.8. In each of the cases in this corollary one may apply Proposition 2.2.2 by choosing the pair {A, B} as in (2.2.9) or (2.2.10) with μ ∈ C \ R, μ ∈ C+, or μ ∈ C−, respectively.

**Corollary 2.2.3.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let AΘ be the closed extension of S in H corresponding to the closed relation Θ in G via (2.1.5). Assume that Θ = {A, B}. Then the following statements hold:

(i) AΘ is self-adjoint if and only if

$$\operatorname{Im} (\mathcal{A}^\*\mathcal{B}) = 0 \quad \text{and} \quad (\mathcal{B} - \mu \mathcal{A})^{-1} \in \mathbf{B}(\mathcal{G})$$

for some, and hence for all, μ ∈ C+ and for some, and hence for all, μ ∈ C−;

(ii) AΘ is maximal dissipative if and only if

$$\operatorname{Im} (\mathcal{A}^\*\mathcal{B}) \geq 0 \quad \text{and} \quad (\mathcal{B} - \mu \mathcal{A})^{-1} \in \mathbf{B}(\mathcal{G})$$

for some, and hence for all, μ ∈ C−;

(iii) AΘ is maximal accumulative if and only if

$$\operatorname{Im} (\mathcal{A}^\*\mathcal{B}) \leq 0 \quad \text{and} \quad (\mathcal{B} - \mu \mathcal{A})^{-1} \in \mathbf{B}(\mathcal{G})$$

for some, and hence for all, μ ∈ C+.

In the case that the representation Θ = {A, B} is chosen so that Θ∗ = {A∗, B∗}, the extension AΘ is given by (2.2.8).

Now the converse question will be addressed. Let A be a closed extension of S given in terms of boundary conditions. The problem is to determine a corresponding parameter Θ in G such that A = AΘ.

**Proposition 2.2.4.** Let S be a closed symmetric relation in H, and let {G, Γ0, Γ1} be a boundary triplet for S∗. Assume that F is a Hilbert space, M, N ∈ **B**(G, F), and that, without loss of generality, the space F is minimal:

$$
\mathfrak{F} = \overline{\text{span}}\left\{ \overline{\text{ran}}\,\mathcal{M}, \overline{\text{ran}}\,\mathcal{N} \right\}.
$$

Furthermore, assume that M − μN ∈ **B**(G, F) is bijective for some μ ∈ C and let A be an intermediate extension of S of the form

$$A = \{ \widehat{f} \in S^\* \, : \, \mathcal{M} \Gamma\_0 \widehat{f} = \mathcal{N} \Gamma\_1 \widehat{f} \}. \tag{2.2.11}$$

Then A is closed and A = AΘ, where the parameter Θ = {A, B} is given by

$$\{\mathcal{A},\mathcal{B}\} = \left\{ (\mathcal{M} - \mu \mathcal{N})^{-1} \mathcal{N}, (\mathcal{M} - \mu \mathcal{N})^{-1} \mathcal{M} \right\}.$$

Proof. First observe that the intermediate extension A in (2.2.11) is closed since M, N ∈ **B**(G, F). Moreover, A corresponds to the closed relation Θ in G given by

$$\Theta = \left\{ \{\varphi, \varphi'\} \in \mathcal{G} \times \mathcal{G} : \mathcal{M}\varphi = \mathcal{N}\varphi' \right\}.$$

Now the assertion follows from Proposition 1.10.7. □

Again let Θ be a closed relation in G and let AΘ be the corresponding closed extension in H via (2.1.5). Assume, in addition, that Θ admits an orthogonal decomposition

$$
\Theta = \Theta\_{\mathrm{op}} \dot{\oplus} \Theta\_{\mathrm{mul}}, \qquad \mathcal{G} = \mathcal{G}\_{\mathrm{op}} \oplus \mathcal{G}\_{\mathrm{mul}},
$$

into a (not necessarily densely defined) operator part Θop acting in the Hilbert space Gop = (mul Θ)⊥, which coincides with the closure of dom Θ∗, and a multivalued part Θmul = {0} × mul Θ in the Hilbert space Gmul = mul Θ; cf. Theorem 1.3.16 and the discussion following it. Recall from Theorem 1.4.11, Theorem 1.5.1, and Theorem 1.6.12 that any closed symmetric, self-adjoint, (maximal) dissipative, or (maximal) accumulative relation Θ in G gives rise to such a decomposition. If Pop denotes the orthogonal projection in G onto Gop, then the closed extension AΘ in (2.1.5) has the form

$$A\_{\Theta} = \left\{ \widehat{f} \in S^\* : \Theta\_{\mathrm{op}} P\_{\mathrm{op}} \Gamma\_0 \widehat{f} = P\_{\mathrm{op}} \Gamma\_1 \widehat{f}, \ (I\_{\mathcal{G}} - P\_{\mathrm{op}}) \Gamma\_0 \widehat{f} = 0 \right\}. \tag{2.2.12}$$

Note that this abstract boundary condition also requires PopΓ0f̂ ∈ dom Θop.
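The purely multivalued parameter Θ = {0} × G, which corresponds to A0 itself (see the proof of Proposition 2.1.8), illustrates (2.2.12); the computation below is a sketch, not part of the original text.

```latex
% Sketch: (2.2.12) for the purely multivalued parameter \Theta = \{0\} \times \mathcal{G}.
% Here \operatorname{mul}\Theta = \mathcal{G}, so that \mathcal{G}_{\rm op} = \{0\},
% P_{\rm op} = 0, and the operator-part condition in (2.2.12) is void.
% What remains is the boundary condition
\[
  (I_{\mathcal{G}} - P_{\rm op})\,\Gamma_0\widehat f \;=\; \Gamma_0\widehat f \;=\; 0,
\]
% so that A_\Theta = \ker\Gamma_0 = A_0, in accordance with the fact that
% A_0 corresponds to the parameter \{0\} \times \mathcal{G}.
```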

## **2.3 Associated** *γ***-fields and Weyl functions**

Let S be a closed symmetric relation in the Hilbert space H and let {G, Γ0, Γ1} be a boundary triplet for S∗. Recall from Lemma 2.1.7 that Γ0 maps N̂λ(S∗) bijectively onto G when λ ∈ ρ(A0). Hence, the inverse mapping

$$
\widehat{\gamma}(\lambda) := \left(\Gamma\_0 \upharpoonright \widehat{\mathfrak{N}}\_{\lambda}(S^\*)\right)^{-1}, \quad \lambda \in \rho(A\_0),
$$

maps G bijectively onto N̂λ(S∗). Let π1 be the orthogonal projection from H × H onto H × {0}. Then π1 maps N̂λ(S∗) bijectively onto Nλ(S∗).

**Definition 2.3.1.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let A0 = ker Γ0. Then

$$\rho(A\_0) \ni \lambda \mapsto \gamma(\lambda) = \left\{ \{ \Gamma\_0 \widehat{f}\_\lambda, f\_\lambda \} \, : \, \widehat{f}\_\lambda \in \widehat{\mathfrak{N}}\_\lambda(S^\*) \right\} \tag{2.3.1}$$

or, equivalently,

$$
\rho(A\_0) \ni \lambda \mapsto \gamma(\lambda) = \pi\_1 \widehat{\gamma}(\lambda) = \pi\_1 \left( \Gamma\_0 \restriction \widehat{\mathfrak{N}}\_{\lambda}(S^\*) \right)^{-1},
$$

is called the γ-field associated with the boundary triplet {G, Γ0, Γ1}.


The main properties of the γ-field will now be discussed.

**Proposition 2.3.2.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let A0 = ker Γ0. Then the following statements hold for the corresponding γ-field γ:


(i) for every λ ∈ ρ(A0) the operator γ(λ) ∈ **B**(G, H) maps G isomorphically onto Nλ(S∗);

(ii) for all λ, μ ∈ ρ(A0) one has

$$\gamma(\lambda) = \left(I + (\lambda - \mu)(A\_0 - \lambda)^{-1}\right)\gamma(\mu);$$

(iii) the operator function γ : ρ(A0) → **B**(G, H), λ → γ(λ), is holomorphic, i.e., the limit

$$\frac{d}{d\mu}\gamma(\mu) = \lim\_{\lambda \to \mu} \frac{\gamma(\lambda) - \gamma(\mu)}{\lambda - \mu}$$

exists for all μ ∈ ρ(A0) in **B**(G, H);

(iv) for all λ ∈ ρ(A0) the operator γ(λ)<sup>∗</sup> ∈ **B**(H, G) is given by

$$\gamma(\lambda)^\* h = \Gamma\_1 \left\{ (A\_0 - \overline{\lambda})^{-1} h, \left( I + \overline{\lambda} (A\_0 - \overline{\lambda})^{-1} \right) h \right\}, \quad h \in \mathfrak{H}, \tag{2.3.2}$$

and ker γ(λ)∗ = (Nλ(S∗))⊥ = ran (S − λ̄) holds. Moreover, one has

$$\Gamma\left\{ (A\_0 - \overline{\lambda})^{-1} h, \left( I + \overline{\lambda} (A\_0 - \overline{\lambda})^{-1} \right) h \right\} = \{ 0, \gamma(\lambda)^\* h \}, \quad h \in \mathfrak{H}. \tag{2.3.3}$$

Proof. (i) Let λ ∈ ρ(A0). Since the restriction of Γ0 to N̂λ(S∗) is an isomorphism from N̂λ(S∗) onto G (see Lemma 2.1.7), while π1 is an isomorphism from N̂λ(S∗) onto Nλ(S∗), it follows from Definition 2.3.1 that the mapping γ(λ) is an isomorphism from G onto Nλ(S∗). From this it is also clear that γ(λ) ∈ **B**(G, H).

(ii) Let λ, μ ∈ ρ(A0) and let ϕ ∈ G. Then there exists f̂μ = {fμ, μfμ} ∈ N̂μ(S∗) such that ϕ = Γ0f̂μ and hence fμ = γ(μ)ϕ. Due to S∗ = A0 +̂ N̂λ(S∗) there exist ĥ ∈ A0 and f̂λ = {fλ, λfλ} ∈ N̂λ(S∗) such that

$$
\widehat{f}\_{\mu} = \widehat{h} + \widehat{f}\_{\lambda}.
$$

Observe that f̂λ − f̂μ = −ĥ ∈ A0, which gives Γ0f̂λ = Γ0f̂μ, so that Γ0f̂λ = ϕ and fλ = γ(λ)ϕ. Moreover, this observation also shows that for some g ∈ H

$$\{f\_{\lambda}, \lambda f\_{\lambda}\} = \{f\_{\mu}, \mu f\_{\mu}\} + \left\{(A\_0 - \lambda)^{-1}g, \left(I + \lambda(A\_0 - \lambda)^{-1}\right)g\right\}.$$

Hence, {fλ, 0} = {fμ, (μ − λ)fμ} + {(A0 − λ)−1g, g}, so that g = (λ − μ)fμ. Therefore, fλ = fμ + (λ − μ)(A0 − λ)−1fμ, which implies

$$\gamma(\lambda)\varphi = \left(I + (\lambda - \mu)(A\_0 - \lambda)^{-1}\right)\gamma(\mu)\varphi.$$

(iii) Fix some μ ∈ ρ(A0). Then it follows from (ii) and the fact that the mapping λ → (A0 − λ)−1 is a holomorphic operator function with values in **B**(H) that λ → γ(λ) is a holomorphic operator function on ρ(A0) with values in **B**(G, H).

(iv) Fix λ ∈ ρ(A0) and let h ∈ H. Then there exists k̂ = {k, k′} ∈ A0 with h = k′ − λ̄k. Let ϕ ∈ G; then γ(λ)ϕ = fλ for some fλ ∈ Nλ(S∗). Hence, with the abstract Green identity for k̂ = {k, k′} and f̂λ = {fλ, λfλ} it follows from Γ0k̂ = 0 that

$$\begin{aligned} \left(\varphi,\gamma(\lambda)^{\*}h\right) &= \left(\gamma(\lambda)\varphi,k'-\overline{\lambda} k\right) \\ &= \left(f\_{\lambda},k'-\overline{\lambda}k\right) \\ &= -\left(\left(\lambda f\_{\lambda},k\right)-\left(f\_{\lambda},k'\right)\right) \\ &= -\left(\left(\Gamma\_{1}\widehat{f}\_{\lambda},\Gamma\_{0}\widehat{k}\right)-\left(\Gamma\_{0}\widehat{f}\_{\lambda},\Gamma\_{1}\widehat{k}\right)\right) \\ &= \left(\Gamma\_{0}\widehat{f}\_{\lambda},\Gamma\_{1}\widehat{k}\right) \\ &= \left(\varphi,\Gamma\_{1}\widehat{k}\right), \end{aligned}$$

which implies

$$\gamma(\lambda)^\* h = \Gamma\_1 \widehat{k} = \Gamma\_1 \left\{ (A\_0 - \overline{\lambda})^{-1} h, \left( I + \overline{\lambda} (A\_0 - \overline{\lambda})^{-1} \right) h \right\}.$$

The identity ker γ(λ)∗ = (Nλ(S∗))⊥ follows from ran γ(λ) = Nλ(S∗). Furthermore, the identity (Nλ(S∗))⊥ = ran (S − λ̄) is clear and (2.3.3) follows from (2.3.2) and

$$\left\{ (A\_0 - \overline{\lambda})^{-1} h, \left( I + \overline{\lambda} (A\_0 - \overline{\lambda})^{-1} \right) h \right\} \in A\_0 = \ker \Gamma\_0.$$

This completes the proof. □

In the case where the symmetric relation S is a densely defined symmetric operator and {G, Γ0, Γ1} is a boundary triplet for S∗ with boundary mappings Γ0 and Γ1 defined on dom S∗ (see the text below Definition 2.1.1 and (2.1.2)) the formula for the adjoint γ(λ)∗ of the corresponding γ-field in Proposition 2.3.2 (iv) has the simpler form

$$\gamma(\lambda)^\* h = \Gamma\_1 (A\_0 - \overline{\lambda})^{-1} h, \qquad \lambda \in \rho(A\_0), \ h \in \mathfrak{H}.$$

According to Proposition 2.3.2 (iv), the action of Γ1 on a general element of A0 is expressed in terms of the operator γ(λ)∗. The form of this action is particularly simple on eigenelements of A0.

**Corollary 2.3.3.** Let λ ∈ ρ(A0) and assume that {h, xh} ∈ A0 with x ∈ R. Then

$$
\Gamma\_1 \{ h, xh \} = (x - \overline{\lambda}) \gamma(\lambda)^\* h.
$$

If {0, h} ∈ A0, then

$$
\Gamma\_1 \{ 0, h \} = \gamma(\lambda)^\* h.
$$

Proof. Let λ ∈ ρ(A0) and let {h, xh} ∈ A0 with x ∈ R. Then

$$h = \left(A\_0 - \overline{\lambda}\right)^{-1} (x - \overline{\lambda}) h,$$

which together with Proposition 2.3.2 (iv) leads to

$$\begin{aligned} (x - \overline{\lambda})\gamma(\lambda)^\*h &= (x - \overline{\lambda})\Gamma\_1\{ (A\_0 - \overline{\lambda})^{-1}h, \left( I + \overline{\lambda}(A\_0 - \overline{\lambda})^{-1} \right)h \} \\ &= \Gamma\_1\{ h, xh \}. \end{aligned}$$

If {0, h} ∈ A0, then h ∈ ker (A0 − λ̄)−1 and the expression for Γ1{0, h} follows directly from Proposition 2.3.2 (iv). □

The definition and properties of the γ-field now give rise to the notion of Weyl function. It is defined, as in the case of the γ-field, for a closed symmetric relation S in terms of the boundary triplet for S<sup>∗</sup> and the eigenspaces of S∗.

**Definition 2.3.4.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let A0 = ker Γ0. Then

$$\rho(A\_0) \ni \lambda \mapsto M(\lambda) = \left\{ \{ \Gamma\_0 \widehat{f}\_\lambda, \Gamma\_1 \widehat{f}\_\lambda \} \, : \, \widehat{f}\_\lambda \in \widehat{\mathfrak{N}}\_\lambda(S^\*) \right\} \tag{2.3.4}$$

or, equivalently,

$$
\rho(A\_0) \ni \lambda \mapsto M(\lambda) = \Gamma\_1 \widehat{\gamma}(\lambda) = \Gamma\_1 \left( \Gamma\_0 \restriction \widehat{\mathfrak{N}}\_{\lambda}(S^\*) \right)^{-1},
$$

is called the Weyl function associated with the boundary triplet {G, Γ0, Γ1}.

Here is a simple example of a Weyl function for a trivial symmetric relation S in H. Note that in this example one has G = H, i.e., the corresponding boundary triplet maps onto H × H; this situation is not typical in standard applications; cf. Chapters 6, 7, and 8.

**Example 2.3.5.** Let S = {0, 0} be the trivial symmetric relation in H. It is clear that S∗ = H × H and Nλ(S∗) = H for λ ∈ C. Now define

$$
\Gamma\_0 \widehat{f} = f' \quad \text{and} \quad \Gamma\_1 \widehat{f} = -f, \quad \widehat{f} = \{f, f'\} \in S^\*,
$$

so that Γ : S<sup>∗</sup> → H × H is surjective and (2.1.1) is satisfied. Hence, {H, Γ0, Γ1} is a boundary triplet for S∗. Note that

$$A\_0 = \ker \Gamma\_0 = \mathfrak{H} \times \{0\}$$

is a self-adjoint extension of S with ρ(A0) = C \ {0}, σ(A0) = {0}, and N0(A0) = H. It follows from Definition 2.3.1 and Definition 2.3.4 that the γ-field and the Weyl function are given by γ(λ) = (1/λ)I and M(λ) = −(1/λ)I, respectively.
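The formulas for γ and M in Example 2.3.5 can be verified directly from Definitions 2.3.1 and 2.3.4; the following short computation is a sketch, not part of the original text.

```latex
% Sketch: verification of Example 2.3.5. For \lambda \in \rho(A_0) = \mathbb{C}\setminus\{0\},
%   \widehat{\mathfrak{N}}_\lambda(S^*) = \{\{f,\lambda f\} : f \in \mathfrak{H}\},
% and \Gamma_0\{f,\lambda f\} = \lambda f. Given \varphi \in \mathfrak{H}, the unique
% \widehat f_\lambda \in \widehat{\mathfrak{N}}_\lambda(S^*) with
% \Gamma_0\widehat f_\lambda = \varphi is \{\varphi/\lambda,\varphi\}, so that
\[
  \gamma(\lambda)\varphi = \tfrac{1}{\lambda}\,\varphi,
  \qquad
  M(\lambda)\varphi = \Gamma_1\{\varphi/\lambda,\varphi\} = -\tfrac{1}{\lambda}\,\varphi .
\]
% Consistency check with Proposition 2.3.2 (iv): here
% (A_0-\overline{\lambda})^{-1} = -\tfrac{1}{\overline{\lambda}}\,I, so that
% \gamma(\lambda)^* h = \Gamma_1\{-h/\overline{\lambda},\,0\}
%                     = \tfrac{1}{\overline{\lambda}}\,h,
% which is indeed the adjoint of \gamma(\lambda) = \tfrac{1}{\lambda}I.
```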

Next some elementary properties of the Weyl function are discussed. Recall that the real part and imaginary part of a bounded operator T ∈ **B**(G) are defined as Re T = (T + T∗)/2 and Im T = (T − T∗)/(2i), respectively.

**Proposition 2.3.6.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let A0 = ker Γ0. Then the following statements hold for the corresponding γ-field γ and Weyl function M:


(i) for every λ ∈ ρ(A0) one has M(λ) = Γ1γ̂(λ) ∈ **B**(G);

(ii) for every λ ∈ ρ(A0) and f̂λ ∈ N̂λ(S∗) the identity M(λ)Γ0f̂λ = Γ1f̂λ holds or, equivalently,

$$
\Gamma \widehat{\gamma}(\lambda) \varphi = \left\{ \Gamma\_0 \widehat{\gamma}(\lambda) \varphi, \Gamma\_1 \widehat{\gamma}(\lambda) \varphi \right\} = \{ \varphi, M(\lambda) \varphi \},
$$

for every ϕ ∈ G;

(iii) for all λ, μ ∈ ρ(A0) the identity

$$M(\lambda) - M(\mu)^\* = (\lambda - \overline{\mu})\gamma(\mu)^\*\gamma(\lambda)$$

holds, and, in particular, the symmetry condition M(λ)∗ = M(λ̄) holds for all λ ∈ ρ(A0);


(iv) for λ ∈ C+ (λ ∈ C−) the operator Im M(λ) is nonnegative (nonpositive) and boundedly invertible;

(v) for fixed λ0 ∈ ρ(A0) and all λ ∈ ρ(A0) one has

$$M(\lambda) = \operatorname{Re} M(\lambda\_0) + \gamma(\lambda\_0)^\* \left[\lambda - \operatorname{Re}\lambda\_0 + (\lambda - \lambda\_0)(\lambda - \overline{\lambda}\_0)(A\_0 - \lambda)^{-1}\right] \gamma(\lambda\_0);$$

(vi) the identity

$$\gamma(\overline{\mu})^\*(A\_0 - \lambda)^{-1}\gamma(\nu) = \frac{M(\lambda)}{(\lambda - \nu)(\lambda - \mu)} + \frac{M(\mu)}{(\mu - \lambda)(\mu - \nu)} + \frac{M(\nu)}{(\nu - \lambda)(\nu - \mu)}$$
 
holds for λ, μ, ν ∈ ρ(A0) such that λ ≠ ν, λ ≠ μ, and ν ≠ μ.

Proof. (i) Let λ ∈ ρ(A0). By Lemma 2.1.7, the restriction of Γ0 to N̂λ(S∗) is an isomorphism between N̂λ(S∗) and G. Hence, the inverse γ̂(λ) is an isomorphism between G and N̂λ(S∗), and since the operator Γ1 : S∗ → G is continuous by Proposition 2.1.2 (i), it follows from Definition 2.3.4 that M(λ) = Γ1γ̂(λ) ∈ **B**(G).

(ii) It is clear from (i) and the definition of M(λ) that M(λ)Γ0f̂λ = Γ1f̂λ for every f̂λ ∈ N̂λ(S∗). Now γ̂(λ)ϕ belongs to N̂λ(S∗) for ϕ ∈ G, so that

$$\left\{\Gamma\_0\widehat{\gamma}(\lambda)\varphi,\Gamma\_1\widehat{\gamma}(\lambda)\varphi\right\} = \left\{\Gamma\_0\widehat{\gamma}(\lambda)\varphi,M(\lambda)\Gamma\_0\widehat{\gamma}(\lambda)\varphi\right\} = \left\{\varphi,M(\lambda)\varphi\right\}.$$

Conversely, assume that {Γ0γ̂(λ)ϕ, Γ1γ̂(λ)ϕ} = {ϕ, M(λ)ϕ} for all ϕ ∈ G and let f̂λ ∈ N̂λ(S∗). Then f̂λ = γ̂(λ)ϕ for some ϕ ∈ G, and the identity

$$\left\{\Gamma\_0 \widehat{f}\_\lambda, \Gamma\_1 \widehat{f}\_\lambda\right\} = \left\{\Gamma\_0 \widehat{\gamma}(\lambda)\varphi, \Gamma\_1 \widehat{\gamma}(\lambda)\varphi\right\} = \left\{\varphi, M(\lambda)\varphi\right\}$$

yields M(λ)Γ0f̂λ = Γ1f̂λ.

(iii) Let λ, μ ∈ ρ(A0). For given ϕ, ψ ∈ G one can choose

$$
\widehat{h}\_{\lambda} = \{h\_{\lambda}, \lambda h\_{\lambda}\} \in \widehat{\mathfrak{N}}\_{\lambda}(S^\*) \quad \text{and} \quad \widehat{k}\_{\mu} = \{k\_{\mu}, \mu k\_{\mu}\} \in \widehat{\mathfrak{N}}\_{\mu}(S^\*),
$$

such that ϕ = Γ0ĥλ and ψ = Γ0k̂μ. Clearly, γ(λ)ϕ = hλ, γ(μ)ψ = kμ, and the abstract Green identity applied to ĥλ and k̂μ shows that

$$\begin{split} \left( \left( M(\lambda) - M(\mu)^{\*} \right) \varphi, \psi \right) &= \left( M(\lambda) \varphi, \psi \right) - \left( \varphi, M(\mu) \psi \right) \\ &= \left( M(\lambda) \Gamma\_{0} \widehat{h}\_{\lambda}, \Gamma\_{0} \widehat{k}\_{\mu} \right) - \left( \Gamma\_{0} \widehat{h}\_{\lambda}, M(\mu) \Gamma\_{0} \widehat{k}\_{\mu} \right) \\ &= \left( \Gamma\_{1} \widehat{h}\_{\lambda}, \Gamma\_{0} \widehat{k}\_{\mu} \right) - \left( \Gamma\_{0} \widehat{h}\_{\lambda}, \Gamma\_{1} \widehat{k}\_{\mu} \right) \\ &= \left( \lambda h\_{\lambda}, k\_{\mu} \right) - \left( h\_{\lambda}, \mu k\_{\mu} \right) \\ &= \left( \lambda - \overline{\mu} \right) \left( h\_{\lambda}, k\_{\mu} \right) \\ &= \left( \left( \lambda - \overline{\mu} \right) \gamma(\lambda) \varphi, \gamma(\mu) \psi \right). \end{split}$$

Thus, one has the identity M(λ) − M(μ)∗ = (λ − μ̄)γ(μ)∗γ(λ). Setting μ = λ̄ it follows that M(λ) = M(λ̄)∗ and therefore M(λ)∗ = M(λ̄), λ ∈ ρ(A0).

(iv) The assertion (iii) gives for λ ∈ C \ R

$$\frac{(\operatorname{Im} M(\lambda)\varphi,\varphi)}{\operatorname{Im} \lambda} = (\gamma(\lambda)^\* \gamma(\lambda)\varphi,\varphi) = \|\gamma(\lambda)\varphi\|^2, \quad \varphi \in \mathfrak{G}.$$

Hence, for λ ∈ C+ or λ ∈ C− the operator Im M(λ) is nonnegative or nonpositive, respectively. As γ(λ) is an isomorphism from G onto Nλ(S∗), it follows that for all λ ∈ C \ R the operator Im M(λ) is boundedly invertible.

(v) Let λ<sup>0</sup> ∈ ρ(A0) be fixed. Then assertion (iii) implies

$$\operatorname{Im} M(\lambda\_0) = (\operatorname{Im} \lambda\_0) \gamma(\lambda\_0)^\* \gamma(\lambda\_0),$$

while γ(λ) = (I + (λ − λ0)(A0 − λ)−1)γ(λ0), λ ∈ ρ(A0), by Proposition 2.3.2. Using (iii) this leads to

$$\begin{aligned} M(\lambda) &= M(\lambda\_0)^\* + (\lambda - \overline{\lambda}\_0)\gamma(\lambda\_0)^\*\gamma(\lambda) \\ &= \operatorname{Re} M(\lambda\_0) - i \operatorname{Im} M(\lambda\_0) + (\lambda - \overline{\lambda}\_0)\gamma(\lambda\_0)^\* \left[ I + (\lambda - \lambda\_0)(A\_0 - \lambda)^{-1} \right] \gamma(\lambda\_0) \\ &= \operatorname{Re} M(\lambda\_0) + \gamma(\lambda\_0)^\* \left[ (\lambda - \operatorname{Re} \lambda\_0) + (\lambda - \lambda\_0)(\lambda - \overline{\lambda}\_0)(A\_0 - \lambda)^{-1} \right] \gamma(\lambda\_0) \end{aligned}$$

for all λ ∈ ρ(A0).

(vi) It follows from item (iii) and γ(λ) = (I + (λ − ν)(A0 − λ)−1)γ(ν) in Proposition 2.3.2 (ii) that

$$\begin{split} \gamma(\overline{\mu})^\*(A\_0 - \lambda)^{-1}\gamma(\nu) &= \gamma(\overline{\mu})^\* \frac{\gamma(\lambda) - \gamma(\nu)}{\lambda - \nu} \\ &= \frac{1}{\lambda - \nu} \Big( \gamma(\overline{\mu})^\*\gamma(\lambda) - \gamma(\overline{\mu})^\*\gamma(\nu) \Big) \\ &= \frac{1}{\lambda - \nu} \Big( \frac{M(\lambda) - M(\overline{\mu})^\*}{\lambda - \mu} - \frac{M(\nu) - M(\overline{\mu})^\*}{\nu - \mu} \Big), \end{split}$$

and a simple calculation using M(μ̄)∗ = M(μ) then yields the assertion. □

In the next corollary it turns out that the Weyl function M is a uniformly strict Nevanlinna function; cf. Definition A.4.1 and Definition A.4.7.

**Corollary 2.3.7.** The Weyl function M in Definition 2.3.4 is a uniformly strict **B**(G)-valued Nevanlinna function. Its values M(λ) are maximal dissipative (maximal accumulative) operators for λ ∈ C⁺ (λ ∈ C⁻), and −λ ∈ ρ(M(λ)) for all λ ∈ C \ R.

Proof. According to Proposition 2.3.2 (iii), the function λ → γ(λ) is holomorphic on ρ(A0). Hence, it follows from Proposition 2.3.6 (iii) with fixed μ ∈ ρ(A0) that the function λ → M(λ) is holomorphic on ρ(A0) and hence, in particular, on the possibly smaller subset C \ R. Clearly, according to Proposition 2.3.6 (iii) and (iv), one has M(λ)∗ = M(λ̄) and (Im λ)(Im M(λ)) ≥ 0 for λ ∈ C \ R, and hence M is a **B**(G)-valued Nevanlinna function. It follows from Proposition 2.3.6 (iv) that M is uniformly strict. □

**Corollary 2.3.8.** Let M be the Weyl function in Definition 2.3.4. Then the following statements hold:

(i) for x ∈ ρ(A0) ∩ R the operator M(x) ∈ **B**(G) is self-adjoint;

(ii) for x ∈ ρ(A0) ∩ R the derivative M′(x) ∈ **B**(G) is a nonnegative self-adjoint operator and 0 ∈ ρ(M′(x));

(iii) if (a, b) ⊂ R belongs to ρ(A0), then for every ϕ ∈ G the function

$$x \mapsto (M(x)\varphi, \varphi)$$

is nondecreasing on (a, b);

(iv) if (a, b) ⊂ R belongs to ρ(A0), then there exist self-adjoint relations M(a) and M(b) in G such that

$$M(b) = \lim\_{x \uparrow b} M(x) \quad \text{and} \quad M(a) = \lim\_{x \downarrow a} M(x)$$

in the strong graph sense or, equivalently, in the strong resolvent sense on C \ R.

Proof. (i) It follows from M(λ)∗ = M(λ̄), λ ∈ ρ(A0), that for x ∈ ρ(A0) ∩ R one has M(x)∗ = M(x), i.e., M(x) ∈ **B**(G) is self-adjoint.

(ii) Since M is holomorphic on ρ(A0), it is clear that the derivative M′(x) ∈ **B**(G) exists. Moreover, for all ϕ ∈ G and y ≠ x Proposition 2.3.6 (iii) shows that

$$\begin{aligned} (M'(x)\varphi,\varphi) &= \lim\_{y \to x} \frac{(M(x)\varphi,\varphi) - (M(y)\varphi,\varphi)}{x - y} \\ &= \lim\_{y \to x} (\gamma(x)\varphi,\gamma(y)\varphi) = \|\gamma(x)\varphi\|^2 \end{aligned}$$

and hence M′(x) ≥ 0 is self-adjoint. Since γ(x) maps G isomorphically onto Nx(S∗), it also follows that 0 ∈ ρ(M′(x)).

(iii) If (a, b) ⊂ R belongs to ρ(A0), then for all ϕ ∈ G the mapping x → (M(x)ϕ, ϕ) is differentiable, and (ii) implies that x → (M(x)ϕ, ϕ) is nondecreasing on (a, b).

(iv) For a < y < x < b it follows from (iii) that (M(y)ϕ, ϕ) ≤ (M(x)ϕ, ϕ) for all ϕ ∈ G. If γy is a lower bound for M(y), then Corollary 1.9.10 implies that there exists a semibounded self-adjoint relation M(b) such that M(x) converges in the strong resolvent sense to M(b) on C \ [γy, ∞) when x tends to b. According to Corollary 1.9.6 (i), this is equivalent to strong graph convergence of M(x) to M(b).

The same considerations as above show that (−M(y)ϕ, ϕ) ≤ (−M(x)ϕ, ϕ) for a < x < y < b and ϕ ∈ G, and hence −M(x) converges in the strong resolvent sense to a semibounded self-adjoint relation −M(a) on C \ [γy, ∞) when x tends to a; here γy is a lower bound for −M(y). This implies that M(x) tends to M(a) in the strong graph sense and in the strong resolvent sense. □

It is known that every isolated spectral point of a self-adjoint operator or relation A0 is an eigenvalue and a pole of first order of the resolvent λ → (A0 − λ)⁻¹. As a consequence of Proposition 2.3.6 (v), the isolated singularities of the Weyl function M are poles of first order. This is formulated in the next corollary, which can also be regarded as a simple example of the connection between the properties of the Weyl function M and the spectrum of A0. The full connection between these objects is studied in detail in Section 3.5 and Section 3.6.

**Corollary 2.3.9.** If x ∈ R is an isolated singularity of M and Bx is a disc centered at x such that M is holomorphic in Bx \ {x}, then M admits a norm convergent Laurent series expansion of the form

$$M(\lambda) = \frac{M\_{-1}}{\lambda - x} + \sum\_{k=0}^{\infty} M\_k (\lambda - x)^k, \quad M\_{-1}, M\_0, M\_1, \dots \in \mathbf{B}(\mathcal{G}).$$

In particular,

$$\lim\_{\lambda \to x} (\lambda - x)M(\lambda) = M\_{-1} = \frac{1}{2\pi i} \int\_{\mathcal{C}} M(\lambda) \, d\lambda,$$

where C denotes the boundary of Bx.
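A sketch of the argument behind the residue formula (not spelled out in the text): if x is an isolated eigenvalue of A0 and E denotes the orthogonal projection in H onto ker(A0 − x), then the resolvent admits the expansion (A0 − λ)⁻¹ = (x − λ)⁻¹E + (holomorphic near x). Substituting this in Proposition 2.3.6 (v) with fixed λ0 ∈ ρ(A0) gives

$$M\_{-1} = \lim\_{\lambda \to x} (\lambda - x) M(\lambda) = \lim\_{\lambda \to x} \frac{\lambda - x}{x - \lambda}\, (\lambda - \lambda\_0)(\lambda - \overline{\lambda}\_0)\, \gamma(\lambda\_0)^\* E \gamma(\lambda\_0) = -|x - \lambda\_0|^2\, \gamma(\lambda\_0)^\* E \gamma(\lambda\_0),$$

since all remaining terms in that formula stay bounded near x; in particular, the singularity at x is indeed a pole of first order.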

In the next remark it is explained that a self-adjoint part of a symmetric relation has, roughly speaking, no influence on the corresponding boundary triplet, γ-field, and Weyl function.

**Remark 2.3.10.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let A0 = ker Γ0. Assume that H admits an orthogonal decomposition H = H′ ⊕ H′′ and that S has the orthogonal decomposition

$$S = S' \stackrel{\cdot}{\oplus} A,\tag{2.3.5}$$

where S′ is a closed symmetric relation in H′ and A is a self-adjoint relation in H′′. Then it follows from (2.3.5) and Proposition 1.3.13 that

$$S^\* = (S')^\* \oplus A,\tag{2.3.6}$$

where (S′)∗ stands for the adjoint of S′ in the space H′. Observe that according to (2.3.6) every element {f, f′} ∈ S∗ has the decomposition

$$\{f, f'\} = \{h, h'\} + \{k, k'\}, \quad \{h, h'\} \in (S')^\*, \ \{k, k'\} \in A. \tag{2.3.7}$$

Since A ⊂ S, (2.3.7) shows that

$$
\Gamma\_0\{f, f'\} = \Gamma\_0\{h, h'\} \quad \text{and} \quad \Gamma\_1\{f, f'\} = \Gamma\_1\{h, h'\}.
$$

Hence, if Γ′0 and Γ′1 denote the restrictions of Γ0 and Γ1 to (S′)∗, then it is easily seen that {G, Γ′0, Γ′1} is a boundary triplet for (S′)∗ such that

$$A\_0' = \ker \Gamma\_0', \quad A\_0 = \ker \Gamma\_0 = A\_0' \oplus A.$$

Moreover, note that (2.3.6) shows

$$
\mathfrak{N}\_{\lambda}(S^\*) = \mathfrak{N}\_{\lambda}((S^{'})^\*), \quad \lambda \in \rho(A\_0) = \rho(A\_0^{'}) \cap \rho(A),
$$

which implies that the Weyl function M′ and the γ-field γ′ corresponding to {G, Γ′0, Γ′1} satisfy

$$M'(\lambda) = M(\lambda), \quad \gamma'(\lambda) = \gamma(\lambda), \quad \lambda \in \rho(A\_0).$$

For completeness observe that if H is a closed intermediate extension of S and H′ = H ∩ (S′)∗, then S′ ⊂ H′ ⊂ (S′)∗ and H = H′ ⊕ A. Hence, one may discard the self-adjoint part A in H′′ without disturbing the boundary triplet structure.

## **2.4 Existence and construction of boundary triplets**

Here the existence of boundary triplets and their construction, based on the decompositions in Section 1.7, are addressed. Recall first from Corollary 1.7.13 that a closed symmetric relation S in a Hilbert space H admits self-adjoint extensions in H if and only if the defect numbers

$$\dim \mathfrak{N}\_{\lambda}(S^\*) \quad \text{and} \quad \dim \mathfrak{N}\_{\mu}(S^\*) \tag{2.4.1}$$

of S are equal for some, and hence for all, λ ∈ C⁺ and for some, and hence for all, μ ∈ C⁻. Since any boundary triplet for S∗ induces two self-adjoint extensions A0 and A1 of S, it is clear that a boundary triplet can only exist if the defect numbers are equal. It turns out that this condition is also sufficient.

The following main result makes explicit how to construct a boundary triplet in terms of a given self-adjoint extension of a closed symmetric relation S (which exists if and only if the defect numbers in (2.4.1) coincide). The following notation will be used. For μ ∈ C the natural embedding of Nμ(S∗) into H is denoted by ιNμ(S∗) and its adjoint is the orthogonal projection PNμ(S∗) from H onto Nμ(S∗).

**Theorem 2.4.1.** Let S be a closed symmetric relation in H and assume that H is a self-adjoint extension of S in H. Fix μ ∈ ρ(H) and decompose S∗ as

$$S^\* = H \,\widehat{+}\, \widehat{\mathfrak{N}}\_{\mu}(S^\*), \quad direct\ sum. \tag{2.4.2}$$

Let f̂ = {f, f′} ∈ S∗ have the corresponding decomposition

$$
\widehat{f} = \widehat{f}\_0 + \widehat{f}\_\mu,\tag{2.4.3}
$$

with f̂0 = {f0, f′0} ∈ H and f̂μ = {fμ, μfμ} ∈ N̂μ(S∗). Then

$$
\Gamma\_0 \widehat{f} := f\_\mu \quad \text{and} \quad \Gamma\_1 \widehat{f} := P\_{\mathfrak{N}\_\mu(S^\*)} (f\_0' - \overline{\mu} f\_0 + \mu f\_\mu) \tag{2.4.4}
$$

define a boundary triplet {Nμ(S∗), Γ0, Γ1} for S<sup>∗</sup> such that H = ker Γ0. Moreover, for λ ∈ ρ(H) the corresponding γ-field γ is given by

$$\gamma(\lambda) = \left( I + (\lambda - \mu)(H - \lambda)^{-1} \right) \iota\_{\mathfrak{N}\_{\mu}(S^\*)} \tag{2.4.5}$$

and the corresponding Weyl function M is given by

$$M(\lambda) = \lambda + (\lambda - \mu)(\lambda - \overline{\mu})P\_{\mathfrak{N}\_{\mu}(S^\*)}(H - \lambda)^{-1}\iota\_{\mathfrak{N}\_{\mu}(S^\*)}.\tag{2.4.6}$$

Proof. Since H is a self-adjoint extension of S, the direct sum decomposition (2.4.2) with μ ∈ ρ(H) follows from Corollary 1.7.5. Hence, for every f̂ ∈ S∗ there is a unique decomposition as in (2.4.3). Let ĝ ∈ S∗ have a corresponding decomposition

$$
\widehat{g} = \widehat{g}\_0 + \widehat{g}\_{\mu},
\tag{2.4.7}
$$

where ĝ0 = {g0, g′0} ∈ H and ĝμ = {gμ, μgμ} ∈ N̂μ(S∗). Then it follows directly from the decompositions (2.4.3), (2.4.7), and (f′0, g0) = (f0, g′0) that

$$\begin{split} (f', g) - (f, g') &= \left( f\_0' + \mu f\_{\mu}, g\_0 + g\_{\mu} \right) - \left( f\_0 + f\_{\mu}, g\_0' + \mu g\_{\mu} \right) \\ &= \left( f\_0' + \mu f\_{\mu}, g\_{\mu} \right) + \left( \mu f\_{\mu}, g\_0 \right) - \left( f\_{\mu}, g\_0' + \mu g\_{\mu} \right) - \left( f\_0, \mu g\_{\mu} \right) \\ &= \left( f\_0' - \overline{\mu} f\_0 + \mu f\_{\mu}, g\_{\mu} \right) - \left( f\_{\mu}, g\_0' - \overline{\mu} g\_0 + \mu g\_{\mu} \right). \end{split} \tag{2.4.8}$$

Moreover, it follows from the definition (2.4.4) applied to f̂ and ĝ that

$$\begin{split} \left(\Gamma\_{1}\widehat{f}, \Gamma\_{0}\widehat{g}\right)\_{\mathfrak{N}\_{\mu}(S^{\*})} &- \left(\Gamma\_{0}\widehat{f}, \Gamma\_{1}\widehat{g}\right)\_{\mathfrak{N}\_{\mu}(S^{\*})} \\ &= \left(P\_{\mathfrak{N}\_{\mu}(S^{\*})}(f\_{0}^{\prime} - \overline{\mu}f\_{0} + \mu f\_{\mu}), g\_{\mu}\right)\_{\mathfrak{N}\_{\mu}(S^{\*})} \\ &- \left(f\_{\mu}, P\_{\mathfrak{N}\_{\mu}(S^{\*})}(g\_{0}^{\prime} - \overline{\mu}g\_{0} + \mu g\_{\mu})\right)\_{\mathfrak{N}\_{\mu}(S^{\*})} \\ &= \left(f\_{0}^{\prime} - \overline{\mu}f\_{0} + \mu f\_{\mu}, g\_{\mu}\right) - \left(f\_{\mu}, g\_{0}^{\prime} - \overline{\mu}g\_{0} + \mu g\_{\mu}\right). \end{split} \tag{2.4.9}$$

A combination of (2.4.8) and (2.4.9) shows that the abstract Green identity (2.1.1) holds.

It will now be verified that Γ = (Γ0, Γ1) : S∗ → Nμ(S∗) × Nμ(S∗) is surjective. To see this, let ϕ, ϕ′ ∈ Nμ(S∗). Since μ ∈ ρ(H), one can choose {f0, f′0} ∈ H such that

$$f\_0' - \overline{\mu} f\_0 = \varphi' - \mu \varphi. \tag{2.4.10}$$

It is clear from (2.4.2) that

$$\widehat{f} := \{f\_0, f\_0'\} + \{\varphi, \mu\varphi\} \in S^\*,$$

and therefore (2.4.4) shows that

$$
\Gamma\_0 \widehat{f} = \varphi, \quad \Gamma\_1 \widehat{f} = P\_{\mathfrak{N}\_\mu(S^\*)} (f\_0' - \overline{\mu} f\_0 + \mu \varphi) = \varphi',
$$

where (2.4.10) was used in the last equality. Hence, ran Γ = Nμ(S∗)× Nμ(S∗) and thus {Nμ(S∗), Γ0, Γ1} is a boundary triplet for S∗. It follows from the definition of Γ<sup>0</sup> and the decomposition (2.4.2) that H = ker Γ0.

Now (2.4.5) and (2.4.6) will be verified. Let f̂μ = {fμ, μfμ} ∈ N̂μ(S∗). Then (2.4.4) gives

$$
\Gamma\_0 \widehat{f}\_\mu = f\_\mu \quad \text{and} \quad \Gamma\_1 \widehat{f}\_\mu = \mu f\_\mu.
$$

Therefore, Definition 2.3.1 leads to

$$\gamma(\mu) = \left\{ \{ \Gamma\_0 \widehat{f}\_\mu, f\_\mu \} \, : \, \widehat{f}\_\mu \in \widehat{\mathfrak{N}}\_\mu(S^\*) \right\} = \left\{ \{ f\_\mu, f\_\mu \} \, : \, \widehat{f}\_\mu \in \widehat{\mathfrak{N}}\_\mu(S^\*) \right\}$$

or, equivalently, γ(μ) : Nμ(S∗) → H acts as fμ ↦ fμ. Thus, γ(μ) is the canonical embedding of Nμ(S∗) into H,

$$\gamma(\mu) = \iota\_{\mathfrak{N}\_{\mu}(S^\*)},\tag{2.4.11}$$

and γ(μ)∗ : H → Nμ(S∗) is the orthogonal projection onto Nμ(S∗), that is, γ(μ)∗ = PNμ(S∗). Proposition 2.3.2 (ii) and (2.4.11) imply that the γ-field of {Nμ(S∗), Γ0, Γ1} is of the required form. Furthermore, Definition 2.3.4 leads to

$$M(\mu) = \left\{ \{ \Gamma\_0 \widehat{f}\_\mu, \Gamma\_1 \widehat{f}\_\mu \} \, : \, \widehat{f}\_\mu \in \widehat{\mathfrak{N}}\_\mu(S^\*) \right\} = \left\{ \{ f\_\mu, \mu f\_\mu \} \, : \, \widehat{f}\_\mu \in \widehat{\mathfrak{N}}\_\mu(S^\*) \right\}$$

or, equivalently,

$$M(\mu) = \mu.$$

Hence, Proposition 2.3.6 (v) with λ0 = μ gives the desired result. □
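For completeness, the last step of the proof can be spelled out. With λ0 = μ one has M(μ) = μ, Re M(μ) = Re μ, γ(μ) = ιNμ(S∗), and γ(μ)∗ = PNμ(S∗), so that Proposition 2.3.6 (v) becomes

$$\begin{aligned} M(\lambda) &= \operatorname{Re} \mu + P\_{\mathfrak{N}\_{\mu}(S^\*)} \left[ (\lambda - \operatorname{Re} \mu) + (\lambda - \mu)(\lambda - \overline{\mu})(H - \lambda)^{-1} \right] \iota\_{\mathfrak{N}\_{\mu}(S^\*)} \\ &= \lambda + (\lambda - \mu)(\lambda - \overline{\mu}) P\_{\mathfrak{N}\_{\mu}(S^\*)} (H - \lambda)^{-1} \iota\_{\mathfrak{N}\_{\mu}(S^\*)}, \end{aligned}$$

where PNμ(S∗)ιNμ(S∗) = I on Nμ(S∗) was used; this is precisely (2.4.6).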

It is interesting to see what Theorem 2.4.1 means in the simple case when the underlying symmetric relation is trivial; cf. Example 2.3.5, which is opposite in the sense that there ker Γ0 = H × {0} and M(λ) = −(1/λ)I. Again this example is not typical, since in standard applications G ≠ H; cf. Chapters 6, 7, and 8.

**Example 2.4.2.** Let S = {0, 0} be the trivial symmetric relation in H. Note that

$$H = \{0\} \times \mathfrak{H}$$

is a self-adjoint extension of S with 0 ∈ ρ(H). It is clear that S∗ = H × H and that N̂0(S∗) = H × {0}. Therefore, one has the direct sum decomposition

$$S^\* = H \,\widehat{+}\, \widehat{\mathfrak{N}}\_0(S^\*),$$

and any f̂ ∈ S∗ has the corresponding decomposition

$$\widehat{f} = \{f, f'\} = \{0, f'\} + \{f, 0\}, \quad \{0, f'\} \in H, \quad \{f, 0\} \in \widehat{\mathfrak{N}}\_0(S^\*).$$

According to (2.4.4), one sees that

$$
\Gamma\_0 \widehat{f} = f \quad \text{and} \quad \Gamma\_1 \widehat{f} = f', \quad \widehat{f} = \{f, f'\} \in S^\*,
$$

defines a boundary triplet {H, Γ0, Γ1} for S∗ with ker Γ0 = H. Note that ρ(H) = C and that for every λ ∈ C the resolvent (H − λ)⁻¹ is the zero operator. Hence, the γ-field is given by γ(λ) = I and the Weyl function is given by M(λ) = λI.
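For the record, the γ-field and Weyl function in this example can also be obtained from the general formulas (2.4.5) and (2.4.6): with μ = 0, ιN0(S∗) = PN0(S∗) = I, and (H − λ)⁻¹ = 0 one finds

$$\gamma(\lambda) = \left( I + \lambda (H - \lambda)^{-1} \right) I = I \quad \text{and} \quad M(\lambda) = \lambda + \lambda^2 (H - \lambda)^{-1} = \lambda I,$$

in agreement with the direct computation above.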

There is an addendum to Theorem 2.4.1 when the decomposition (2.4.2) is replaced by a decomposition involving

$$\widehat{\mathfrak{N}}\_{\infty}(S^\*) = \left\{ \{0, f'\} : f' \in \operatorname{mul} S^\* \right\}.$$

In fact, the following result may be seen as a limit result obtained from (2.4.2) with μ → ∞. The embedding of N∞(S∗) = mul S∗ into H is denoted by ιN∞(S∗) and its adjoint is the orthogonal projection PN∞(S∗) from H onto N∞(S∗). The proof of Proposition 2.4.3 is straightforward. Observe that Example 2.3.5 is an illustration of the following proposition.

**Proposition 2.4.3.** Let S be a closed symmetric operator in H and assume that H is a self-adjoint extension of S which belongs to **B**(H). Then S<sup>∗</sup> can be decomposed as

$$S^\* = H \,\widehat{+}\, \widehat{\mathfrak{N}}\_{\infty}(S^\*), \quad direct\ sum. \tag{2.4.12}$$

Let f̂ = {f, f′} ∈ S∗ have the corresponding decomposition

$$\widehat{f} = \widehat{f}\_0 + \widehat{f}\_{\infty},$$

with f̂0 = {f0, Hf0} ∈ H and f̂∞ = {0, f∞} ∈ N̂∞(S∗). Then

$$
\Gamma\_0 \widehat{f} := f\_{\infty} \quad \text{and} \quad \Gamma\_1 \widehat{f} := -P\_{\mathfrak{N}\_{\infty}(S^\*)} f\_0 \tag{2.4.13}
$$

define a boundary triplet {N∞(S∗), Γ0, Γ1} for S<sup>∗</sup> such that H = ker Γ0. Moreover, for λ ∈ ρ(H) the corresponding γ-field γ is given by

$$\gamma(\lambda) = -(H - \lambda)^{-1} \iota\_{\mathfrak{N}\_{\infty}(S^\*)},$$

and the corresponding Weyl function M is given by

$$M(\lambda) = P\_{\mathfrak{N}\_{\infty}(S^\*)} (H - \lambda)^{-1} \iota\_{\mathfrak{N}\_{\infty}(S^\*)}.$$

**Remark 2.4.4.** Let S be a closed symmetric operator in H and let H ∈ **B**(H) be a self-adjoint extension of S as in Proposition 2.4.3. Then S is bounded and hence dom S is closed. Decompose H = dom S ⊕ N∞(S∗) and note that

$$S = \begin{pmatrix} S\_{11} \\ S\_{21} \end{pmatrix} : \operatorname{dom} S \to \begin{pmatrix} \operatorname{dom} S \\ \mathfrak{N}\_{\infty}(S^\*) \end{pmatrix},$$

and in a similar way

$$H = \begin{pmatrix} H\_{11} & H\_{21}^\* \\ H\_{21} & H\_{22} \end{pmatrix} : \begin{pmatrix} \operatorname{dom} S \\ \mathfrak{N}\_{\infty}(S^\*) \end{pmatrix} \to \begin{pmatrix} \operatorname{dom} S \\ \mathfrak{N}\_{\infty}(S^\*) \end{pmatrix}.$$

It follows that H11 = S11 and H21 = S21, and hence H21∗ = S21∗. Relative to the decomposition (2.4.12) of S∗, the boundary triplet in (2.4.13) can be written as

$$
\Gamma\_0 \widehat{f} = f\_{\infty} \quad \text{and} \quad \Gamma\_1 \widehat{f} = -f\_2, \quad \text{where } \widehat{f} = \left\{ \begin{pmatrix} f\_1 \\ f\_2 \end{pmatrix}, \begin{pmatrix} f\_1' \\ f\_2' \end{pmatrix} \right\} \in S^\*.
$$

Let Θ be a closed relation in G = N∞(S∗). Then the corresponding extension A<sup>Θ</sup> of S is given by

$$A\_{\Theta} = \left\{ \left\{ \begin{pmatrix} f\_1 \\ f\_2 \end{pmatrix}, \begin{pmatrix} S\_{11}f\_1 + S\_{21}^\*f\_2 \\ S\_{21}f\_1 + H\_{22}f\_2 + f\_{\infty} \end{pmatrix} \right\} : \{f\_{\infty}, -f\_2\} \in \Theta \right\},$$

which can formally be written as

$$A\_{\Theta} = \begin{pmatrix} S\_{11} & S\_{21}^\* \\ S\_{21} & H\_{22} - \Theta^{-1} \end{pmatrix}.$$

Therefore, the extensions of S may be interpreted as solutions of the completion problem posed by the incomplete 2 × 2 operator matrix

$$
\begin{pmatrix} S\_{11} & S\_{21}^\* \\ S\_{21} & \* \end{pmatrix}.
$$
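The passage from the description of AΘ above to this completion problem can be sketched as follows: the condition {f∞, −f2} ∈ Θ means −f2 ∈ Θ(f∞), that is, f∞ ∈ Θ⁻¹(−f2) = −Θ⁻¹(f2) in the sense of linear relations, and hence the second component of AΘ takes the form

$$S\_{21}f\_1 + H\_{22}f\_2 + f\_{\infty} \in S\_{21}f\_1 + \left( H\_{22} - \Theta^{-1} \right) f\_2,$$

which explains the entry H22 − Θ⁻¹; when Θ⁻¹ is not an operator, this expression is to be understood in the sense of relations.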

Theorem 2.4.1 has some variations when the self-adjoint extension H in (2.4.2) is further decomposed; cf. Section 1.7. The most straightforward results are presented in the following corollaries. In the next result the direct sum decomposition from Corollary 1.7.10 (with μ = λ) is used.

**Corollary 2.4.5.** Let S be a closed symmetric relation in H, assume that H is a self-adjoint extension of S in H, and fix μ ∈ C \ R. Then

$$S^\* = S \,\widehat{+}\, \left\{ \{ (H - \overline{\mu})^{-1} k, (I + \overline{\mu}(H - \overline{\mu})^{-1})k \} : k \in \mathfrak{N}\_{\mu}(S^\*) \right\} \,\widehat{+}\, \widehat{\mathfrak{N}}\_{\mu}(S^\*),$$

where the sums are direct. Let f̂ = {f, f′} ∈ S∗ have the corresponding decomposition

$$\{f, f'\} = \{h, h'\} + \left\{(H - \overline{\mu})^{-1}k, \left(I + \overline{\mu}(H - \overline{\mu})^{-1}\right)k\right\} + \{f\_{\mu}, \mu f\_{\mu}\},\quad(2.4.14)$$

where ĥ = {h, h′} ∈ S, k ∈ Nμ(S∗), and fμ ∈ Nμ(S∗). Then

$$
\Gamma\_0 \widehat{f} := f\_\mu \quad \text{and} \quad \Gamma\_1 \widehat{f} := k + \mu f\_\mu \tag{2.4.15}
$$

define a boundary triplet {Nμ(S∗), Γ0, Γ1} for S<sup>∗</sup> such that H = ker Γ0. The corresponding γ-field and Weyl function are given by (2.4.5) and (2.4.6).

Proof. It follows from Corollary 1.7.10 that every f̂ = {f, f′} ∈ S∗ can be written as

$$\{f, f'\} = \{f\_0, f'\_0\} + \{f\_\mu, \mu f\_\mu\},$$

where {f0, f′0} ∈ H, {fμ, μfμ} ∈ N̂μ(S∗), and

$$\{f\_0, f\_0'\} = \{h, h'\} + \left\{(H - \overline{\mu})^{-1}k, \left(I + \overline{\mu}(H - \overline{\mu})^{-1}\right)k\right\},$$

with {h, h′} ∈ S and k ∈ Nμ(S∗). The boundary mappings in Theorem 2.4.1 then have the form Γ0f̂ = fμ and

$$\begin{aligned} \Gamma\_1 \widehat{f} &= P\_{\mathfrak{N}\_\mu(S^\*)} (f\_0' - \overline{\mu} f\_0 + \mu f\_\mu) \\ &= P\_{\mathfrak{N}\_\mu(S^\*)} \left( h' - \overline{\mu} h + k + \mu f\_\mu \right) \\ &= k + \mu f\_\mu, \end{aligned}$$

where h′ − μ̄h ∈ ran (S − μ̄) ⊂ Nμ(S∗)⊥ and k + μfμ ∈ Nμ(S∗) were used in the last step. This shows that the mappings in (2.4.15) form a boundary triplet with the same γ-field and Weyl function as in Theorem 2.4.1. □

In Theorem 2.4.1, Proposition 2.4.3, and Corollary 2.4.5 the boundary triplets were based on decompositions of S<sup>∗</sup> in Section 1.7. The following result gives a boundary triplet for a decomposition of S<sup>∗</sup> which is a mixture of the above decompositions.

**Corollary 2.4.6.** Let S be a closed symmetric relation in H, assume that H is a self-adjoint extension of S in H, and fix μ ∈ C \ R. Every f̂ = {f, f′} ∈ S∗ has the unique decomposition

$$\begin{aligned} \{f, f'\} &= \{h, h'\} + \{ \left(I + \overline{\mu}(H - \overline{\mu})^{-1} \right) \psi, \left(\mu + \overline{\mu} + \overline{\mu}^2 (H - \overline{\mu})^{-1} \right) \psi\} \\ &\quad + \{ (H - \overline{\mu})^{-1} \varphi, \left(I + \overline{\mu}(H - \overline{\mu})^{-1} \right) \varphi \}, \end{aligned}$$

with ĥ = {h, h′} ∈ S and ψ, ϕ ∈ Nμ(S∗). Then

$$
\Gamma\_0 \widehat{f} = \psi \quad \text{and} \quad \Gamma\_1 \widehat{f} = \varphi + 2(\text{Re}\,\mu)\psi \tag{2.4.16}
$$

define a boundary triplet {Nμ(S∗), Γ0, Γ1} for S<sup>∗</sup> such that H = ker Γ0. The corresponding γ-field and Weyl function are given by (2.4.5) and (2.4.6).

Proof. Let f̂ = {f, f′} ∈ S∗. Then according to (2.4.14) there is the decomposition

$$\{f, f'\} = \{h, h'\} + \left\{(H - \overline{\mu})^{-1}k, \left(I + \overline{\mu}(H - \overline{\mu})^{-1}\right)k\right\} + \{\psi, \mu\psi\},$$

where ĥ = {h, h′} ∈ S, k ∈ Nμ(S∗), and ψ ∈ Nμ(S∗) are uniquely determined. Define the element ϕ by k = μ̄ψ + ϕ, so that ϕ ∈ Nμ(S∗) and the right-hand side of the above decomposition can be rewritten as

$$\{h, h'\} + \left\{ (H - \overline{\mu})^{-1} (\overline{\mu}\psi + \varphi), \left( I + \overline{\mu}(H - \overline{\mu})^{-1} \right) (\overline{\mu}\psi + \varphi) \right\} + \{\psi, \mu\psi\}.$$

This yields the decomposition for {f, f′} in the statement. The boundary triplet in (2.4.15) now reads as (2.4.16); the corresponding γ-field and Weyl function are given by (2.4.5) and (2.4.6). □

By von Neumann's second formula (see Theorem 1.7.12) one can describe the self-adjoint extension H in Theorem 2.4.1 by means of an isometric operator from Nμ̄(S∗) onto Nμ(S∗). This observation also gives rise to the construction of a boundary triplet, where the parameter space is given by Nμ(S∗).

**Theorem 2.4.7.** Let S be a closed symmetric relation in H, let H be a self-adjoint extension of S, and fix some μ ∈ C \ R. Let W be the isometric mapping from Nμ̄(S∗) onto Nμ(S∗) such that

$$H = S \,\widehat{+}\, (I - \widehat{W}) \widehat{\mathfrak{N}}\_{\overline{\mu}}(S^\*) = S \,\widehat{+}\, \left\{ \{ f\_{\overline{\mu}} - Wf\_{\overline{\mu}}, \overline{\mu}f\_{\overline{\mu}} - \mu Wf\_{\overline{\mu}} \} : f\_{\overline{\mu}} \in \mathfrak{N}\_{\overline{\mu}}(S^\*) \right\} \tag{2.4.17}$$

and decompose f̂ = {f, f′} ∈ S∗ according to von Neumann's first formula:

$$\widehat{f} = \{h, h'\} + \{f\_{\mu}, \mu f\_{\mu}\} + \{f\_{\overline{\mu}}, \overline{\mu} f\_{\overline{\mu}}\},\tag{2.4.18}$$

where ĥ = {h, h′} ∈ S, f̂μ = {fμ, μfμ} ∈ N̂μ(S∗), and f̂μ̄ = {fμ̄, μ̄fμ̄} ∈ N̂μ̄(S∗). Then

$$
\Gamma\_0 \widehat{f} = f\_\mu + Wf\_{\overline{\mu}} \quad \text{and} \quad \Gamma\_1 \widehat{f} = \mu f\_\mu + \overline{\mu} Wf\_{\overline{\mu}} \tag{2.4.19}
$$

define a boundary triplet {Nμ(S∗), Γ0, Γ1} for S<sup>∗</sup> such that H = ker Γ0. The corresponding γ-field and the Weyl function are given by (2.4.5) and (2.4.6).

Proof. Let f̂ = {f, f′} ∈ S∗ be decomposed as in (2.4.18) and let ĝ = {g, g′} ∈ S∗ be decomposed in the analogous form

$$
\widehat{g} = \{k, k'\} + \{g\_{\mu}, \mu g\_{\mu}\} + \{g\_{\overline{\mu}}, \overline{\mu} g\_{\overline{\mu}}\},
$$

where k̂ = {k, k′} ∈ S, ĝμ = {gμ, μgμ} ∈ N̂μ(S∗), and ĝμ̄ = {gμ̄, μ̄gμ̄} ∈ N̂μ̄(S∗). Since {h, h′}, {k, k′} ∈ S, one has (h′, k) = (h, k′),

$$\begin{aligned} (h', g\_{\mu} + g\_{\overline{\mu}}) - (h, \mu g\_{\mu} + \overline{\mu} g\_{\overline{\mu}}) &= 0, \\ (\mu f\_{\mu} + \overline{\mu} f\_{\overline{\mu}}, k) - (f\_{\mu} + f\_{\overline{\mu}}, k') &= 0, \end{aligned}$$

and therefore

$$\begin{split} (f',g) - (f,g') &= \left(h' + \mu f\_{\mu} + \overline{\mu}f\_{\overline{\mu}}, k + g\_{\mu} + g\_{\overline{\mu}}\right) - \left(h + f\_{\mu} + f\_{\overline{\mu}}, k' + \mu g\_{\mu} + \overline{\mu}g\_{\overline{\mu}}\right) \\ &= \left(\mu f\_{\mu} + \overline{\mu}f\_{\overline{\mu}}, g\_{\mu} + g\_{\overline{\mu}}\right) - \left(f\_{\mu} + f\_{\overline{\mu}}, \mu g\_{\mu} + \overline{\mu}g\_{\overline{\mu}}\right) \\ &= (\mu - \overline{\mu})(f\_{\mu}, g\_{\mu}) + (\overline{\mu} - \mu)(f\_{\overline{\mu}}, g\_{\overline{\mu}}) \\ &= (\mu - \overline{\mu})(f\_{\mu}, g\_{\mu})\_{\mathfrak{N}\_{\mu}(S^\*)} + (\overline{\mu} - \mu)(Wf\_{\overline{\mu}}, Wg\_{\overline{\mu}})\_{\mathfrak{N}\_{\mu}(S^\*)}. \end{split}$$

On the other hand it follows from (2.4.19) that

$$\begin{split} & (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g})\_{\mathfrak{N}\_\mu(S^\*)} - (\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g})\_{\mathfrak{N}\_\mu(S^\*)} \\ &= \left( \mu f\_\mu + \overline{\mu} W f\_{\overline{\mu}}, g\_\mu + W g\_{\overline{\mu}} \right)\_{\mathfrak{N}\_\mu(S^\*)} - \left( f\_\mu + W f\_{\overline{\mu}}, \mu g\_\mu + \overline{\mu} W g\_{\overline{\mu}} \right)\_{\mathfrak{N}\_\mu(S^\*)} \\ &= (\mu - \overline{\mu}) (f\_\mu, g\_\mu)\_{\mathfrak{N}\_\mu(S^\*)} + (\overline{\mu} - \mu) (W f\_{\overline{\mu}}, W g\_{\overline{\mu}})\_{\mathfrak{N}\_\mu(S^\*)}, \end{split}$$

i.e., the abstract Green identity (2.1.1) holds.

In order to see that Γ = (Γ0, Γ1) : S∗ → Nμ(S∗) × Nμ(S∗) is surjective, consider ϕ, ψ ∈ Nμ(S∗) and define f̂ = {fμ, μfμ} + {fμ̄, μ̄fμ̄} ∈ S∗ by

$$\widehat{f} = \frac{1}{\overline{\mu} - \mu} \left\{ \overline{\mu}\varphi - \psi, \mu(\overline{\mu}\varphi - \psi) \right\} + \frac{1}{\mu - \overline{\mu}} \left\{ W^\*(\mu\varphi - \psi), \overline{\mu}W^\*(\mu\varphi - \psi) \right\}.$$

Then

$$
\Gamma\_0 \widehat{f} = f\_\mu + Wf\_{\overline{\mu}} = \varphi \quad \text{and} \quad \Gamma\_1 \widehat{f} = \mu f\_\mu + \overline{\mu} Wf\_{\overline{\mu}} = \psi,
$$

and therefore Γ = (Γ0, Γ1) maps onto Nμ(S∗) × Nμ(S∗). This implies that {Nμ(S∗), Γ0, Γ1} is a boundary triplet for S∗. Note that f̂ in (2.4.18) is in ker Γ0 if and only if fμ = −Wfμ̄, and from (2.4.17) it then follows that H = ker Γ0.
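The surjectivity computation can be made explicit; for instance, for the first identity one has, using WW∗ = I on Nμ(S∗),

$$\Gamma\_0 \widehat{f} = f\_{\mu} + W f\_{\overline{\mu}} = \frac{\overline{\mu}\varphi - \psi}{\overline{\mu} - \mu} + \frac{\mu\varphi - \psi}{\mu - \overline{\mu}} = \frac{(\overline{\mu} - \mu)\varphi}{\overline{\mu} - \mu} = \varphi,$$

and the computation of Γ1f̂ = ψ is analogous.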

Finally, to describe the γ-field and the Weyl function, consider the decomposition

$$S^\* = H \,\widehat{+}\, \widehat{\mathfrak{N}}\_{\mu}(S^\*) = \ker \Gamma\_0 \,\widehat{+}\, \widehat{\mathfrak{N}}\_{\mu}(S^\*),$$

and note that if f̂ in (2.4.18) belongs to N̂μ(S∗), then f̂ = f̂μ = {fμ, μfμ}. Hence, (2.4.19) gives

$$
\Gamma\_0 \widehat{f}\_\mu = f\_\mu \quad \text{and} \quad \Gamma\_1 \widehat{f}\_\mu = \mu f\_\mu.
$$

In the same way as in the proof of Theorem 2.4.1 one concludes that γ(μ) = ιNμ(S∗) and M(μ) = μ. Now Proposition 2.3.2 (ii) yields (2.4.5) and Proposition 2.3.6 (v) implies (2.4.6). □

Note that the strategy in the proof of Theorem 2.4.7 differs from the strategy in the two previous results. The connection will now be sketched. In Theorem 2.4.7 the isometric mapping W from Nμ̄(S∗) onto Nμ(S∗) determines the boundary triplet {Nμ(S∗), Γ0, Γ1} for S∗ in (2.4.19). The self-adjoint extension H of S determined by W in (2.4.17) then satisfies H = ker Γ0. Now apply Theorem 2.4.1 with this particular self-adjoint extension. Hence, f̂ ∈ S∗ in Theorem 2.4.1 is decomposed in the form

$$\widehat{f} = \{f\_0, f\_0'\} + \{\varphi\_\mu, \mu \varphi\_\mu\},\tag{2.4.20}$$

where {f0, f′0} ∈ H and {ϕμ, μϕμ} ∈ N̂μ(S∗). Making use of the decomposition (2.4.17) of H it follows that

$$\{f\_0, f\_0'\} = \{h, h'\} + \{-W\psi\_{\overline{\mu}}, -\mu W\psi\_{\overline{\mu}}\} + \{\psi\_{\overline{\mu}}, \overline{\mu}\psi\_{\overline{\mu}}\} \tag{2.4.21}$$

with {h, h′} ∈ S and ψμ̄ ∈ Nμ̄(S∗). Therefore, f̂ in (2.4.20) is given by

$$
\widehat{f} = \{h, h'\} + \{f\_{\mu}, \mu f\_{\mu}\} + \{f\_{\overline{\mu}}, \overline{\mu} f\_{\overline{\mu}}\},
$$

where {fμ, μfμ} = {ϕμ − Wψμ̄, μϕμ − μWψμ̄} and {fμ̄, μ̄fμ̄} = {ψμ̄, μ̄ψμ̄}. Now the identity

$$
\varphi\_{\mu} = f\_{\mu} + W\psi\_{\overline{\mu}} = f\_{\mu} + Wf\_{\overline{\mu}},
$$

shows that the boundary maps Γ0 in Theorem 2.4.1 and Theorem 2.4.7 coincide. Moreover, as PNμ(S∗)(h′ − μ̄h) = 0, the identity

$$P\_{\mathfrak{N}\_{\mu}(S^\*)} \left( f\_0' - \overline{\mu} f\_0 + \mu \varphi\_{\mu} \right) = -\mu W \psi\_{\overline{\mu}} + \overline{\mu} W \psi\_{\overline{\mu}} + \mu \varphi\_{\mu} = \mu f\_{\mu} + \overline{\mu} W f\_{\overline{\mu}}$$

follows from (2.4.21), and shows that the boundary maps Γ<sup>1</sup> in Theorem 2.4.1 and Theorem 2.4.7 are the same.

## **2.5 Transformations of boundary triplets**

Let S be a closed symmetric relation in H with equal defect numbers. Then S admits self-adjoint extensions in H and each self-adjoint extension gives rise to a boundary triplet as in Theorem 2.4.1. Hence, boundary triplets for S<sup>∗</sup> are not uniquely determined, with the exception of the trivial case S = S∗. A complete description of all boundary triplets for S<sup>∗</sup> will be given with the help of block operator matrices that are unitary with respect to the indefinite inner products in Section 1.8; cf. (2.1.3). The transformation properties of the corresponding boundary parameters, γ-fields, and Weyl functions are discussed afterwards.

The main result on the description of all boundary triplets for S<sup>∗</sup> is the following theorem. It describes the transformation of boundary triplets.

**Theorem 2.5.1.** Let S be a closed symmetric relation in H, assume that {G, Γ0, Γ1} is a boundary triplet for S∗, and let G′ be a Hilbert space. Then the following statements hold:

(i) Let W ∈ **B**(G × G, G′ × G′) satisfy

$$\mathcal{W}^\* \mathcal{J}\_{\mathcal{G}'} \mathcal{W} = \mathcal{J}\_{\mathcal{G}} \quad \text{and} \quad \mathcal{W} \mathcal{J}\_{\mathcal{G}} \mathcal{W}^\* = \mathcal{J}\_{\mathcal{G}'}, \tag{2.5.1}$$


and define

$$
\begin{pmatrix} \Gamma\_0'\\\Gamma\_1' \end{pmatrix} = \mathcal{W} \begin{pmatrix} \Gamma\_0\\\Gamma\_1 \end{pmatrix} = \begin{pmatrix} W\_{11} & W\_{12} \\ W\_{21} & W\_{22} \end{pmatrix} \begin{pmatrix} \Gamma\_0\\\Gamma\_1 \end{pmatrix}. \tag{2.5.2}
$$

Then {G′, Γ′0, Γ′1} is a boundary triplet for S∗.

(ii) Let {G', Γ'0, Γ'1} be a boundary triplet for S<sup>∗</sup>. Then there exists a unique operator W ∈ **B**(G × G, G' × G') satisfying (2.5.1) such that (2.5.2) holds.

Proof. (i) Note that the operator W ∈ **B**(G × G, G' × G') is unitary from (G<sup>2</sup>, [[·, ·]]G<sup>2</sup>) to (G'<sup>2</sup>, [[·, ·]]G'<sup>2</sup>); cf. Proposition 1.8.2. Hence, for f̂, ĝ ∈ S<sup>∗</sup> one has

$$\left[\Gamma^{\prime}\widehat{f}, \Gamma^{\prime}\widehat{g}\right]\_{\mathcal{G}^{\prime 2}} = \left[\mathcal{W}\Gamma\widehat{f}, \mathcal{W}\Gamma\widehat{g}\right]\_{\mathcal{G}^{\prime 2}} = \left[\Gamma\widehat{f}, \Gamma\widehat{g}\right]\_{\mathcal{G}^{2}} = \left[\widehat{f}, \widehat{g}\right]\_{\mathfrak{H}^{2}}.$$

Therefore, Γ'0 and Γ'1 satisfy the abstract Green identity (2.1.1); cf. (2.1.4). Since W is surjective by Proposition 1.8.2, Γ' = WΓ is also surjective thanks to the surjectivity of Γ. It follows that {G', Γ'0, Γ'1} is a boundary triplet for S<sup>∗</sup>.

(ii) Assume that {G', Γ'0, Γ'1} is a boundary triplet for S<sup>∗</sup> and define a linear relation W ⊂ G<sup>2</sup> × G'<sup>2</sup> by

$$\mathcal{W} := \{ \{ \Gamma \widehat{f}, \Gamma' \widehat{f} \} : \widehat{f} \in S^\* \}. \tag{2.5.3}$$

It follows from Proposition 2.1.2 (ii) that W is an operator. Indeed, if Γf̂ = 0, then f̂ ∈ S and thus Γ'f̂ = 0. For the operator W one has dom W = G × G and ran W = G' × G' since ran Γ = G × G and ran Γ' = G' × G'.

Define the inner product [[·, ·]]G'<sup>2</sup> on G'<sup>2</sup> as in (2.1.3). Let ϕ̂, ψ̂ ∈ G × G and let f̂, ĝ ∈ S<sup>∗</sup> be such that Γf̂ = ϕ̂ and Γĝ = ψ̂. Then one has

$$\begin{aligned} \left[\mathcal{W}\widehat{\varphi}, \mathcal{W}\widehat{\psi}\right]\_{\mathcal{G}'^2} &= \left[\mathcal{W}\Gamma\widehat{f}, \mathcal{W}\Gamma\widehat{g}\right]\_{\mathcal{G}'^2} = \left[\Gamma'\widehat{f}, \Gamma'\widehat{g}\right]\_{\mathcal{G}'^2} \\ &= \left[\widehat{f}, \widehat{g}\right]\_{\mathfrak{H}^2} = \left[\Gamma\widehat{f}, \Gamma\widehat{g}\right]\_{\mathcal{G}^2} = \left[\widehat{\varphi}, \widehat{\psi}\right]\_{\mathcal{G}^2}, \end{aligned}$$

and hence W is an isometric operator from (G<sup>2</sup>, [[·, ·]]G<sup>2</sup>) to (G'<sup>2</sup>, [[·, ·]]G'<sup>2</sup>). This implies that the first identity in (2.5.1) is satisfied, and from Lemma 1.8.1 it follows that W ∈ **B**(G × G, G' × G'). Furthermore, as W is surjective, Proposition 1.8.2 implies that also the second identity in (2.5.1) holds.

By the definition (2.5.3) of the operator W, the boundary triplets {G, Γ0, Γ1} and {G', Γ'0, Γ'1} are connected via (2.5.2). Moreover, the operator W is unique. Indeed, if Γ' = WΓ and Γ' = W̃Γ, then (W − W̃)Γf̂ = 0 for all f̂ ∈ S<sup>∗</sup>, and as ran Γ = G × G it follows that W = W̃. □

The transformation of a boundary triplet {G, Γ0, Γ1} in Theorem 2.5.1 induces a transformation of the closed relations in the parameter space. Assume that W ∈ **B**(G × G, G' × G') satisfies the identities in (2.5.1) and let {G', Γ'0, Γ'1} be the corresponding transformed boundary triplet in (2.5.2). Let Θ be a relation in G and define Θ' in G' as a Möbius transform of Θ by

$$\Theta' = \mathcal{W}[\Theta] = \left\{ \{W\_{11}\varphi + W\_{12}\varphi', W\_{21}\varphi + W\_{22}\varphi' \} : \{\varphi, \varphi'\} \in \Theta \right\};\tag{2.5.4}$$

cf. Definition 1.8.4. As W is bijective, it follows that Θ' = W[Θ] is closed in G' if and only if Θ is closed in G; cf. (1.8.10).

**Proposition 2.5.2.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} and {G', Γ'0, Γ'1} are boundary triplets for S<sup>∗</sup> connected via Γ' = WΓ as in Theorem 2.5.1. Let Θ be a closed relation in G and let Θ' be defined by (2.5.4). Then the closed intermediate extensions

$$A\_{\Theta} = \ker\left(\Gamma\_1 - \Theta \Gamma\_0\right) \quad \text{and} \quad A'\_{\Theta'} = \ker\left(\Gamma'\_1 - \Theta' \Gamma'\_0\right)$$

coincide, that is, for f̂ ∈ S<sup>∗</sup> one has

$$
\Gamma' \widehat{f} \in \Theta' \quad \Leftrightarrow \quad \Gamma \widehat{f} \in \Theta.
$$

Proof. Let f̂ ∈ S<sup>∗</sup>. Then the transformation formulas (2.5.2) and (2.5.4), and the fact that W is bijective, imply

$$
\Gamma' \widehat{f} \in \Theta' \quad \Leftrightarrow \quad \mathcal{W} \Gamma \widehat{f} \in \mathcal{W}[\Theta] \quad \Leftrightarrow \quad \Gamma \widehat{f} \in \Theta.
$$

Hence, ker (Γ1 − ΘΓ0) and ker (Γ'1 − Θ'Γ'0) coincide; cf. Theorem 2.1.3 (iii). □

Likewise, the transformation of the boundary triplet leads to a transformation of the γ-field and the Weyl function.

**Proposition 2.5.3.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} and {G', Γ'0, Γ'1} are boundary triplets for S<sup>∗</sup> connected via Γ' = WΓ as in Theorem 2.5.1. Let A0 = ker Γ0, A'0 = ker Γ'0, and let γ, γ' and M, M' be the γ-fields and Weyl functions corresponding to {G, Γ0, Γ1} and {G', Γ'0, Γ'1}, respectively. Then for all λ ∈ ρ(A0) ∩ ρ(A'0) the operator

$$W\_{11} + W\_{12}M(\lambda)$$

is an isomorphism from G onto G', and the identities

$$
\gamma'(\lambda) = \gamma(\lambda) \left( W\_{11} + W\_{12} M(\lambda) \right)^{-1} \tag{2.5.5}
$$

and

$$M'(\lambda) = \left(W\_{21} + W\_{22}M(\lambda)\right)\left(W\_{11} + W\_{12}M(\lambda)\right)^{-1} \tag{2.5.6}$$

hold.

Proof. For λ ∈ ρ(A0) and f̂λ ∈ 𝔑̂λ(S<sup>∗</sup>) one has

$$
\Gamma\_0' \widehat{f}\_\lambda = W\_{11} \Gamma\_0 \widehat{f}\_\lambda + W\_{12} \Gamma\_1 \widehat{f}\_\lambda = \left( W\_{11} + W\_{12} M(\lambda) \right) \Gamma\_0 \widehat{f}\_\lambda,
$$

which leads to

$$
\Gamma\_0' \upharpoonright \widehat{\mathfrak{N}}\_{\lambda}(S^\*) = \left( W\_{11} + W\_{12} M(\lambda) \right) \left( \Gamma\_0 \upharpoonright \widehat{\mathfrak{N}}\_{\lambda}(S^\*) \right). \tag{2.5.7}
$$

If, in addition, λ ∈ ρ(A0) ∩ ρ(A'0), then, by Lemma 2.1.7, Γ0 and Γ'0 are isomorphisms from 𝔑̂λ(S<sup>∗</sup>) onto G and G', respectively. Hence, it follows from (2.5.7) that W11 + W12M(λ) is an isomorphism from G onto G', and therefore

$$\left(\Gamma\_0' \upharpoonright \widehat{\mathfrak{N}}\_{\lambda}(S^\*)\right)^{-1} = \left(\Gamma\_0 \upharpoonright \widehat{\mathfrak{N}}\_{\lambda}(S^\*)\right)^{-1}\left(W\_{11} + W\_{12}M(\lambda)\right)^{-1}.$$

If one applies the orthogonal projection π1 from H × H onto H × {0} to both sides, then (2.5.5) follows. Similarly, for λ ∈ ρ(A0) ∩ ρ(A'0) one finds that

$$\begin{aligned} \left(W\_{21} + W\_{22}M(\lambda)\right) \left(W\_{11} + W\_{12}M(\lambda)\right)^{-1} \Gamma\_0' \widehat{f}\_{\lambda} &= \left(W\_{21} + W\_{22}M(\lambda)\right) \Gamma\_0 \widehat{f}\_{\lambda} \\ &= W\_{21} \Gamma\_0 \widehat{f}\_{\lambda} + W\_{22} \Gamma\_1 \widehat{f}\_{\lambda} \\ &= \Gamma\_1' \widehat{f}\_{\lambda}, \end{aligned}$$

which yields (2.5.6). □
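In the scalar case G = G' = C the transformation (2.5.6) is an ordinary Möbius transform, and its effect can be illustrated numerically. The following sketch is editorial and not part of the text: it uses a hypothetical value of M(λ) with positive imaginary part and a rotation matrix W, which is J-unitary for J = [[0, −i], [i, 0]] and hence satisfies both identities in (2.5.1) in this scalar setting.

```python
import math

# Editorial scalar sketch of Proposition 2.5.3 with hypothetical values:
# a rotation matrix W is J-unitary for J = [[0, -i], [i, 0]], so it
# satisfies both identities in (2.5.1), and the Moebius transform (2.5.6)
# then maps a value M(lambda) with Im M > 0 to a value M'(lambda) with
# Im M' > 0.
M = 2.0 + 3.0j                        # hypothetical value of M(lambda)

t = 0.7                               # arbitrary rotation angle
W11, W12 = math.cos(t), -math.sin(t)
W21, W22 = math.sin(t), math.cos(t)

# Scalar instance of (2.5.6).
M_new = (W21 + W22 * M) / (W11 + W12 * M)

# Since det W = 1, one has Im M' = Im M / |W11 + W12 M|^2 > 0.
assert M_new.imag > 0
assert abs(M_new.imag - M.imag / abs(W11 + W12 * M) ** 2) < 1e-12
print("ok")
```

The same computation with any other angle t, or any other point of the upper half-plane, behaves identically, reflecting that (2.5.6) preserves the Nevanlinna property of the Weyl function.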

Next some special transformations of boundary triplets and Weyl functions will be discussed. In the first corollary the boundary mappings are interchanged via a flip-flop, which leads to the Weyl function −M<sup>−1</sup>.

**Corollary 2.5.4.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} is a boundary triplet for S<sup>∗</sup> with γ-field γ and Weyl function M, and define

$$
\Gamma\_0' = \Gamma\_1 \quad \text{and} \quad \Gamma\_1' = -\Gamma\_0.
$$

Then {G, Γ'0, Γ'1} is a boundary triplet for S<sup>∗</sup> and ker Γ'0 = ker Γ1. Moreover, for λ ∈ ρ(A0) ∩ ρ(A1) the corresponding γ-field γ' and the Weyl function M' are given by

$$
\gamma'(\lambda) = \gamma(\lambda)M(\lambda)^{-1} \quad \text{and} \quad M'(\lambda) = -M(\lambda)^{-1},
$$

respectively.

Proof. The operator

$$\mathcal{W} = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix} \in \mathbf{B}(\mathcal{G} \times \mathcal{G})$$

satisfies both identities in (2.5.1). Now the assertions follow from Theorem 2.5.1 and Proposition 2.5.3. □
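For scalar G the flip-flop of Corollary 2.5.4 sends a Weyl-function value M(λ) to −M(λ)<sup>−1</sup>. As an editorial sanity check with hypothetical values, one can verify that this keeps the imaginary part positive, since Im(−1/M) = Im M/|M|².

```python
# Editorial scalar check of Corollary 2.5.4 with hypothetical values:
# the flipped value M'(lambda) = -1/M(lambda) satisfies
# Im M' = Im M / |M|^2, so positivity of the imaginary part survives.
for M in (1j, 2.0 + 3.0j, -5.0 + 0.1j):
    M_new = -1.0 / M
    assert abs(M_new.imag - M.imag / abs(M) ** 2) < 1e-12
    assert M_new.imag > 0
print("ok")
```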

The second corollary treats the situation in which a bijective operator D dilates the Weyl function M and a self-adjoint operator P produces a shift of the dilated Weyl function D<sup>∗</sup>MD.

**Corollary 2.5.5.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} is a boundary triplet for S<sup>∗</sup> with γ-field γ and Weyl function M. Let G' be a Hilbert space, let D ∈ **B**(G', G) be boundedly invertible, let P ∈ **B**(G') be self-adjoint, and define

$$
\Gamma\_0' = D^{-1} \Gamma\_0 \quad \text{and} \quad \Gamma\_1' = D^\* \Gamma\_1 + PD^{-1} \Gamma\_0.
$$

Then {G', Γ'0, Γ'1} is a boundary triplet for S<sup>∗</sup> and ker Γ'0 = ker Γ0. Moreover, for λ ∈ ρ(A0) the corresponding γ-field γ' and the Weyl function M' are given by

$$
\gamma'(\lambda) = \gamma(\lambda)D \quad \text{and} \quad M'(\lambda) = D^\*M(\lambda)D + P,\tag{2.5.8}
$$

respectively.

Proof. It is not difficult to check that the operator

$$\mathcal{W} = \begin{pmatrix} D^{-1} & 0 \\ PD^{-1} & D^\* \end{pmatrix} \in \mathbf{B}(\mathcal{G} \times \mathcal{G}, \mathcal{G}' \times \mathcal{G}')$$

satisfies both identities in (2.5.1). Now the assertions follow from Theorem 2.5.1 and Proposition 2.5.3. □
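The effect of (2.5.8) is transparent in the scalar case. The following editorial sketch, with hypothetical values (a nonzero complex number for D and a real number for P), shows that Im M'(λ) = |D|² Im M(λ): the shift leaves the imaginary part untouched and the dilation rescales it by a positive factor.

```python
# Editorial scalar instance of (2.5.8) with hypothetical values:
# M'(lambda) = D* M(lambda) D + P, with D invertible and P self-adjoint
# (here a nonzero complex number and a real number, respectively).
D = 0.5 - 1.5j
P = -4.0
M = 2.0 + 3.0j                       # hypothetical value, Im M > 0

M_new = D.conjugate() * M * D + P

# The real shift P does not change the imaginary part, and the
# dilation rescales it by |D|^2 > 0.
assert abs(M_new.imag - abs(D) ** 2 * M.imag) < 1e-12
assert M_new.imag > 0
print("ok")
```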

The next corollary complements Corollary 2.5.5.

**Corollary 2.5.6.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} and {G', Γ'0, Γ'1} are boundary triplets for S<sup>∗</sup> such that

$$\ker \Gamma\_0 = \ker \Gamma'\_0.$$

Then there exist a boundedly invertible operator D ∈ **B**(G', G) and a self-adjoint operator P ∈ **B**(G') such that

$$
\Gamma\_0' = D^{-1} \Gamma\_0 \quad \text{and} \quad \Gamma\_1' = D^\* \Gamma\_1 + PD^{-1} \Gamma\_0. \tag{2.5.9}
$$

In particular, the γ-fields and Weyl functions corresponding to the boundary triplets {G, Γ0, Γ1} and {G', Γ'0, Γ'1} satisfy (2.5.8).

Proof. It follows from Theorem 2.5.1 that there exists W ∈ **B**(G × G, G' × G') with the properties (2.5.1) such that

$$
\begin{pmatrix} \Gamma\_0'\\\Gamma\_1' \end{pmatrix} = \mathcal{W} \begin{pmatrix} \Gamma\_0\\\Gamma\_1 \end{pmatrix} = \begin{pmatrix} W\_{11} & W\_{12} \\ W\_{21} & W\_{22} \end{pmatrix} \begin{pmatrix} \Gamma\_0\\\Gamma\_1 \end{pmatrix}. \tag{2.5.10}
$$

The assumption ker Γ0 = ker Γ'0 implies W12 = 0. In fact, if f̂ ∈ ker Γ0 = ker Γ'0, then W12Γ1f̂ = 0 by (2.5.10), and hence Proposition 2.1.2 (i) implies W12 = 0. Therefore, the first identity W<sup>∗</sup>JG'W = JG in (2.5.1) means that

$$W\_{11}^\* W\_{22} = I\_{\mathcal{G}}, \quad W\_{22}^\* W\_{11} = I\_{\mathcal{G}}, \quad W\_{11}^\* W\_{21} = W\_{21}^\* W\_{11}.$$

Likewise, the second identity WJGW<sup>∗</sup> = JG' in (2.5.1) means that

$$W\_{11}W\_{22}^\* = I\_{\mathcal{G}'}, \quad W\_{22}W\_{11}^\* = I\_{\mathcal{G}'}, \quad W\_{21}W\_{22}^\* = W\_{22}W\_{21}^\*.$$

It follows that D := W22<sup>∗</sup> ∈ **B**(G', G) is boundedly invertible with D<sup>−1</sup> = W11, that the operator P := W21D ∈ **B**(G') is self-adjoint, and that (2.5.9) is satisfied. □

A combination of Corollary 2.5.5 with G' = G, D = I, P = −Θ, and Corollary 2.5.4 leads to the following statement.

**Corollary 2.5.7.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} is a boundary triplet for S<sup>∗</sup> with γ-field γ and Weyl function M. Let Θ ∈ **B**(G) be self-adjoint, and define

$$
\Gamma\_0' = \Gamma\_1 - \Theta \Gamma\_0 \quad \text{and} \quad \Gamma\_1' = -\Gamma\_0.
$$

Then {G, Γ'0, Γ'1} is a boundary triplet for S<sup>∗</sup> and ker Γ'0 = ker (Γ1 − ΘΓ0) = AΘ holds. Moreover, for λ ∈ ρ(A0) ∩ ρ(AΘ) the corresponding γ-field γ' and the Weyl function M' are given by

$$
\gamma'(\lambda) = -\gamma(\lambda)(\Theta - M(\lambda))^{-1} \quad \text{and} \quad M'(\lambda) = (\Theta - M(\lambda))^{-1},
$$

respectively.

The following statement is also a direct consequence of Corollary 2.5.5.

**Corollary 2.5.8.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} is a boundary triplet for S<sup>∗</sup> with γ-field γ and Weyl function M. Let Q(λ) ∈ **B**(G), λ ∈ C \ R, be a family of operators which satisfies

$$\frac{Q(\lambda) - Q(\mu)^{\*}}{\lambda - \overline{\mu}} = \gamma(\mu)^{\*}\gamma(\lambda), \quad \lambda, \mu \in \mathbb{C} \backslash \mathbb{R}. \tag{2.5.11}$$

Let λ0 ∈ C \ R and define the self-adjoint operator P ∈ **B**(G) by

$$P = \operatorname{Re} Q(\lambda\_0) - \operatorname{Re} M(\lambda\_0).$$

Then Q is the Weyl function corresponding to the boundary triplet {G, Γ'0, Γ'1}, where

$$
\Gamma\_0' = \Gamma\_0 \quad \text{and} \quad \Gamma\_1' = \Gamma\_1 + P\Gamma\_0,
$$

and Q(λ) = M(λ) + P holds for all λ ∈ ρ(A0).

Proof. Due to Proposition 2.3.6 (iii) it follows from the identity (2.5.11) that

$$Q(\lambda) - Q(\lambda\_0)^\* = M(\lambda) - M(\lambda\_0)^\*, \qquad \lambda, \lambda\_0 \in \mathbb{C} \backslash \mathbb{R},$$

and, in particular, Im Q(λ0) = Im M(λ0). Hence, one obtains

$$Q(\lambda) - M(\lambda) = Q(\lambda\_0)^\* - M(\lambda\_0)^\* = \operatorname{Re} Q(\lambda\_0) - \operatorname{Re} M(\lambda\_0) = P.$$

With the choice D = I and P as above, the result follows from Corollary 2.5.5. □

Now it will be shown that a pair of transversal self-adjoint extensions induces a boundary triplet {G, Γ0, Γ1} which determines these extensions via the boundary conditions ker Γ0 and ker Γ1. The following theorem is a consequence of Theorem 2.4.1 and Corollary 2.5.5.

**Theorem 2.5.9.** Let S be a closed symmetric relation in H and assume that H and H' are transversal self-adjoint extensions of S in H, that is,

$$S^\* = H \stackrel{\frown}{+} H'$$

holds; cf. Lemma 1.7.7. Then there exists a boundary triplet {G, Γ0, Γ1} for S<sup>∗</sup> such that

$$H = \ker \Gamma\_0 \quad \text{and} \quad H' = \ker \Gamma\_1. \tag{2.5.12}$$

Proof. As H is a self-adjoint extension of S, there is a boundary triplet {G, Υ0, Υ1} for S<sup>∗</sup> such that H = ker Υ0; cf. Theorem 2.4.1. Since H' is a self-adjoint extension of S it follows from Corollary 2.1.4 (v) that there exists a self-adjoint relation Θ in G such that

$$H' = \ker\left(\Upsilon\_1 - \Theta \Upsilon\_0\right).$$

Furthermore, since H and H' are transversal, it follows from Proposition 2.1.8 (ii) that Θ ∈ **B**(G). Now define

$$
\begin{pmatrix} \Gamma\_0 \\ \Gamma\_1 \end{pmatrix} = \begin{pmatrix} I & 0 \\ -\Theta & I \end{pmatrix} \begin{pmatrix} \Upsilon\_0 \\ \Upsilon\_1 \end{pmatrix},
$$

so that {G, Γ0, Γ1} is a boundary triplet for S<sup>∗</sup> by Corollary 2.5.5. By construction, the boundary triplet {G, Γ0, Γ1} has the properties (2.5.12). □

**Corollary 2.5.10.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} is a boundary triplet for S∗. Let AΘ = ker (Γ1 − ΘΓ0) be a self-adjoint extension of S corresponding to the self-adjoint relation Θ in G via (2.1.5). Then there exists a boundary triplet {G, Γ'0, Γ'1} such that

$$\ker \Gamma\_0' = \ker \left(\Gamma\_1 - \Theta \Gamma\_0\right) \quad \text{and} \quad \ker \Gamma\_1' = \ker \left(\Gamma\_1 + \Theta^{-1} \Gamma\_0\right), \tag{2.5.13}$$

that is, A'0 = AΘ and A'1 = A−Θ<sup>−1</sup>.

Proof. Recall that Θ<sup>∗</sup> = (JΘ)<sup>⊥</sup>, where J denotes the flip-flop operator in G<sup>2</sup> from (1.3.1). Since Θ is self-adjoint it follows that Θ = (JΘ)<sup>⊥</sup> or, in other words,

$$
\mathcal{G}^2 = \Theta \oplus J\Theta.
$$

In particular, one sees that Θ and JΘ = −Θ<sup>−1</sup> are transversal in G. Therefore, the self-adjoint extensions ker (Γ1 − ΘΓ0) and ker (Γ1 + Θ<sup>−1</sup>Γ0) are transversal; cf. Lemma 2.1.5 (ii). Now the assertion follows from Theorem 2.5.9. □
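The decomposition G² = Θ ⊕ JΘ used in the proof can be made concrete in the simplest situation. The following editorial illustration, with a hypothetical real parameter θ, takes G = C and Θ the graph of multiplication by θ; the flip-flop J{φ, φ'} = {φ', −φ} turns Θ into −Θ<sup>−1</sup>, and the two graphs are orthogonal complements in C².

```python
# Editorial illustration of G^2 = Theta (+) J*Theta for G = C and a
# hypothetical real parameter theta: Theta is spanned by (1, theta),
# and its image under the flip-flop {f, f'} -> {f', -f}, namely
# J*Theta = -Theta^{-1}, is spanned by (theta, -1).
theta = 2.5
v = (1.0, theta)        # spans Theta
w = (theta, -1.0)       # spans J*Theta

inner = v[0] * w[0] + v[1] * w[1]   # standard inner product on C^2
det = v[0] * w[1] - v[1] * w[0]     # nonzero <=> the two graphs span C^2

assert inner == 0.0     # orthogonality of Theta and J*Theta
assert det != 0.0       # transversality: Theta + J*Theta = C^2
print("ok")
```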

Corollary 2.5.10 is concerned with the existence of the boundary triplet {G, Γ'0, Γ'1} with the properties (2.5.13). In fact, it is possible to explicitly construct such a boundary triplet via the choice of an appropriate operator W such that Γ' = WΓ; cf. Corollary 2.5.7 for the special case Θ ∈ **B**(G).

**Corollary 2.5.11.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} is a boundary triplet for S∗. Let Θ be a self-adjoint relation in G and choose A, B ∈ **B**(G) such that

$$\Theta = \left\{ \{ \mathcal{A}\varphi, \mathcal{B}\varphi \} : \varphi \in \mathcal{G} \right\}$$

and the identities

$$\mathcal{A}^\* \mathcal{B} = \mathcal{B}^\* \mathcal{A}, \quad \mathcal{A} \mathcal{B}^\* = \mathcal{B} \mathcal{A}^\*, \quad \mathcal{A}^\* \mathcal{A} + \mathcal{B}^\* \mathcal{B} = I = \mathcal{A} \mathcal{A}^\* + \mathcal{B} \mathcal{B}^\*, \tag{2.5.14}$$

hold; cf. Corollary 1.10.9. Then {G, Γ'0, Γ'1}, where

$$
\Gamma\_0' = \mathcal{B}^\* \Gamma\_0 - \mathcal{A}^\* \Gamma\_1 \quad \text{and} \quad \Gamma\_1' = \mathcal{A}^\* \Gamma\_0 + \mathcal{B}^\* \Gamma\_1,\tag{2.5.15}
$$

is a boundary triplet for S<sup>∗</sup> such that both identities in (2.5.13) hold, that is, A'0 = AΘ and A'1 = A−Θ<sup>−1</sup>.

Proof. It is not difficult to check that

$$\mathcal{W} = \begin{pmatrix} \mathcal{B}^\* & -\mathcal{A}^\* \\ \mathcal{A}^\* & \mathcal{B}^\* \end{pmatrix} \in \mathbf{B}(\mathcal{G} \times \mathcal{G}) \tag{2.5.16}$$

satisfies (2.5.1) and it is clear from (2.5.15) that {G, Γ'0, Γ'1} and {G, Γ0, Γ1} are connected via W as in Theorem 2.5.1 (i). Hence, {G, Γ'0, Γ'1} is a boundary triplet for S<sup>∗</sup>. It follows from (2.5.14) that

$$\begin{aligned} \mathcal{W}[\Theta] &= \left\{ \{ \mathcal{B}^\* \mathcal{A} \varphi - \mathcal{A}^\* \mathcal{B} \varphi, \mathcal{A}^\* \mathcal{A} \varphi + \mathcal{B}^\* \mathcal{B} \varphi \} : \{ \mathcal{A} \varphi, \mathcal{B} \varphi \} \in \Theta \right\} \\ &= \{ 0 \} \times \mathcal{G}, \end{aligned}$$

and since −Θ<sup>−1</sup> = {{Bϕ, −Aϕ} : ϕ ∈ G}, it follows in the same way that

$$\begin{split} \mathcal{W}[-\Theta^{-1}] &= \left\{ \left\{ \mathcal{B}^\* \mathcal{B} \varphi + \mathcal{A}^\* \mathcal{A} \varphi, \mathcal{A}^\* \mathcal{B} \varphi - \mathcal{B}^\* \mathcal{A} \varphi \right\} : \left\{ \mathcal{B} \varphi, -\mathcal{A} \varphi \right\} \in -\Theta^{-1} \right\} \\ &= \mathcal{G} \times \{0\}. \end{split}$$

Recall from Proposition 2.5.2 that

$$
\Gamma' \widehat{f} \in \mathcal{W}[\Xi] \quad \Leftrightarrow \quad \Gamma \widehat{f} \in \Xi, \qquad \widehat{f} \in S^\*,
$$

holds for any closed relation Ξ in G. With Ξ = Θ and Ξ = −Θ<sup>−1</sup> one then has

$$
\Gamma' \widehat{f} \in \{0\} \times \mathcal{G} \quad \Leftrightarrow \quad \Gamma \widehat{f} \in \Theta, \qquad \widehat{f} \in S^\*,
$$

and

$$
\Gamma' \widehat{f} \in \mathcal{G} \times \{0\} \quad \Leftrightarrow \quad \Gamma \widehat{f} \in -\Theta^{-1}, \qquad \widehat{f} \in S^\*,
$$

respectively. Now (2.1.11)–(2.1.12) imply

$$A\_0' = \ker \Gamma\_0' = \ker \left(\Gamma\_1 - \Theta \Gamma\_0\right) = A\_\Theta$$

and

$$A\_1' = \ker \Gamma\_1' = \ker \left(\Gamma\_1 + \Theta^{-1} \Gamma\_0\right) = A\_{-\Theta^{-1}}. \qquad \square$$

Assume that the boundary triplets {G, Γ0, Γ1} and {G, Γ'0, Γ'1} are as in Corollary 2.5.11 and let γ and M, and γ' and M', be the corresponding γ-fields and Weyl functions, respectively. Then it follows from Proposition 2.5.3 that for all λ ∈ ρ(A0) ∩ ρ(A'0) one has

$$\gamma'(\lambda) = \gamma(\lambda) \left( \mathcal{B}^\* - \mathcal{A}^\* M(\lambda) \right)^{-1} \tag{2.5.17}$$

and

$$M'(\lambda) = \left(\mathcal{A}^\* + \mathcal{B}^\* M(\lambda)\right) \left(\mathcal{B}^\* - \mathcal{A}^\* M(\lambda)\right)^{-1}.\tag{2.5.18}$$

In the special case where the defect numbers of S are (1, 1) one may choose

$$\mathcal{A} = \frac{1}{\sqrt{s^2 + 1}} \quad \text{and} \quad \mathcal{B} = \frac{s}{\sqrt{s^2 + 1}}, \qquad s \in \mathbb{R} \cup \{\infty\},$$

where A = 0 and B = 1 if s = ∞; this interpretation will also be used in the following. With this choice of A and B the operator in (2.5.16) reduces to the 2 × 2 matrix

$$\mathcal{W} = \frac{1}{\sqrt{s^2 + 1}} \begin{pmatrix} s & -1 \\ 1 & s \end{pmatrix}, \quad s \in \mathbb{R} \cup \{\infty\}.$$

In this case

$$\Gamma\_0' = \frac{1}{\sqrt{s^2 + 1}} \left( s\Gamma\_0 - \Gamma\_1 \right), \quad \Gamma\_1' = \frac{1}{\sqrt{s^2 + 1}} \left( \Gamma\_0 + s\Gamma\_1 \right), \quad s \in \mathbb{R} \cup \{\infty\}, \tag{2.5.19}$$

and for λ ∈ ρ(A0) ∩ ρ(A- <sup>0</sup>) the corresponding γ-field and Weyl function are given by

$$\gamma'(\lambda) = \frac{\sqrt{s^2 + 1}}{s - M(\lambda)} \gamma(\lambda) \quad \text{and} \quad M'(\lambda) = \frac{1 + sM(\lambda)}{s - M(\lambda)}, \quad s \in \mathbb{R} \cup \{\infty\}. \tag{2.5.20}$$
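The scalar family (2.5.20) can be cross-checked against the general formula (2.5.6), again as an editorial sketch with a hypothetical value of M(λ); the case s = 0 recovers the flip-flop of Corollary 2.5.4.

```python
import math

# Editorial check of (2.5.20) with a hypothetical scalar value M(lambda):
# the entries of the rotation matrix W above, inserted into the general
# Moebius formula (2.5.6), reproduce the closed form
# M'(lambda) = (1 + s M(lambda)) / (s - M(lambda)), and s = 0 gives the
# flip-flop M' = -1/M of Corollary 2.5.4.
M = 0.5 + 2.0j

for s in (-3.0, 0.0, 1.0, 10.0):
    r = math.sqrt(s * s + 1.0)
    W11, W12, W21, W22 = s / r, -1.0 / r, 1.0 / r, s / r
    via_W = (W21 + W22 * M) / (W11 + W12 * M)   # general formula (2.5.6)
    direct = (1.0 + s * M) / (s - M)            # closed form (2.5.20)
    assert abs(via_W - direct) < 1e-12
    assert direct.imag > 0                      # Im M' > 0 is preserved

assert abs((1.0 + 0.0 * M) / (0.0 - M) + 1.0 / M) < 1e-12  # s = 0 case
print("ok")
```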

Now let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} is a boundary triplet for S∗. Consider a closed symmetric extension S' of S with the property S' ⊂ A0 = ker Γ0. Then the boundary triplet {G, Γ0, Γ1} can be restricted to (S')<sup>∗</sup> ⊂ S<sup>∗</sup> and A0 coincides with the kernel of the restriction of Γ0. The Weyl function corresponding to this restricted boundary triplet is a compression of the original Weyl function onto a subspace of G. In the following proposition this is made precise from the point of view of an orthogonal decomposition of G.

**Proposition 2.5.12.** Let S be a closed symmetric relation in H and assume that {G, Γ0, Γ1} is a boundary triplet for S<sup>∗</sup> with A0 = ker Γ0, and corresponding γ-field γ and Weyl function M. Assume that G has the orthogonal decomposition

$$
\mathcal{G} = \mathcal{G}' \oplus \mathcal{G}'' \tag{2.5.21}
$$

with corresponding orthogonal projections P' and P'' and canonical embedding ι'. Then the following statements hold:

(i) The relation

$$S' = \left\{ \widehat{f} \in S^\* \, : \, \Gamma\_0 \widehat{f} = 0, \,\, P' \Gamma\_1 \widehat{f} = 0 \right\} \tag{2.5.22}$$

is closed and symmetric with S ⊂ S' ⊂ A0.

(ii) The adjoint (S')<sup>∗</sup> of S' is given by

$$(S')^\* = \{ \widehat{f} \in S^\* \, : \, P^{\prime\prime} \Gamma\_0 \widehat{f} = 0 \}.$$

(iii) The triplet {G', Γ'0, Γ'1}, where

$$
\Gamma\_0' = \Gamma\_0 \upharpoonright (S')^\* \quad \text{and} \quad \Gamma\_1' = P'\Gamma\_1 \upharpoonright (S')^\*,
$$

is a boundary triplet for (S')<sup>∗</sup> such that A0 = ker Γ'0.

(iv) The γ-field γ' and Weyl function M' corresponding to the boundary triplet {G', Γ'0, Γ'1} are given by

$$
\gamma'(\lambda) = \gamma(\lambda)\iota' \quad \text{and} \quad M'(\lambda) = P'M(\lambda)\iota', \quad \lambda \in \rho(A\_0).
$$

Moreover, for every closed symmetric extension S' with S ⊂ S' ⊂ A0 there exists an orthogonal decomposition (2.5.21) of G such that (2.5.22) holds.

Proof. (i) & (ii) It is clear from the definition that S ⊂ S' ⊂ A0 and that S' can be written as

$$S' = \left\{ \widehat{f} \in S^\* \, : \, \Gamma \widehat{f} \in \{0\} \times \mathcal{G}'' \right\}.$$

Hence, S' = AΘ when Θ = {0} × G''. It follows that S' is closed, and by (1.3.4) one has Θ<sup>∗</sup> = G' × G, so that Theorem 2.1.3 (iv) shows

$$(S')^\* = \left\{ \widehat{f} \in S^\* \, : \, \Gamma \widehat{f} \in \mathcal{G}' \times \mathcal{G} \right\} = \left\{ \widehat{f} \in S^\* \, : \, P'' \Gamma\_0 \widehat{f} = 0 \right\}.$$

(iii) With the choice f̂, ĝ ∈ (S')<sup>∗</sup> one has Γ0f̂ = P'Γ0f̂ and Γ0ĝ = P'Γ0ĝ. Then (2.1.1) yields

$$\begin{split} (f',g)\_{\mathfrak{H}} - (f,g')\_{\mathfrak{H}} &= (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g})\_{\mathcal{G}} - (\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g})\_{\mathcal{G}} \\ &= (\Gamma\_1 \widehat{f}, P' \Gamma\_0 \widehat{g})\_{\mathcal{G}'} - (P' \Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g})\_{\mathcal{G}'} \\ &= (\Gamma\_1' \widehat{f}, \Gamma\_0' \widehat{g})\_{\mathcal{G}'} - (\Gamma\_0' \widehat{f}, \Gamma\_1' \widehat{g})\_{\mathcal{G}'}. \end{split} \tag{2.5.23}$$

It follows from the surjectivity of Γ and the identity Γ0f̂ = P'Γ0f̂ for f̂ ∈ (S')<sup>∗</sup> that

$$\Gamma' = \begin{pmatrix} \Gamma\_0 \upharpoonright (S')^\* \\ P' \Gamma\_1 \upharpoonright (S')^\* \end{pmatrix} = \begin{pmatrix} P' \Gamma\_0 \upharpoonright (S')^\* \\ P' \Gamma\_1 \upharpoonright (S')^\* \end{pmatrix} : (S')^\* \to \begin{pmatrix} \mathcal{G}' \\ \mathcal{G}' \end{pmatrix}$$

maps (S')<sup>∗</sup> onto G' × G'. Together with (2.5.23) this shows that {G', Γ'0, Γ'1} is a boundary triplet for (S')<sup>∗</sup>. It is clear that A0 = ker Γ'0 holds.

(iv) Since Γ0 maps 𝔑̂λ(S∗), λ ∈ ρ(A0), one-to-one onto G, the restriction Γ'0 maps 𝔑̂λ((S')∗), λ ∈ ρ(A0), one-to-one onto G'. Hence, (2.3.1) implies that

$$\rho(A\_0) \ni \lambda \mapsto \gamma'(\lambda) = \left\{ \{ \Gamma\_0' \widehat{f}\_\lambda, f\_\lambda \} \, : \, \widehat{f}\_\lambda \in \widehat{\mathfrak{N}}\_\lambda((S')^\*) \right\}$$

and therefore γ'(λ) = γ(λ)ι'. It follows from (2.3.4) that

$$\rho(A\_0) \ni \lambda \mapsto M'(\lambda) = \left\{ \{ \Gamma'\_0 \widehat{f}\_\lambda, \Gamma'\_1 \widehat{f}\_\lambda \} \, : \, \widehat{f}\_\lambda \in \widehat{\mathfrak{N}}\_\lambda((S')^\*) \right\},$$

which shows that M'(λ) = P'M(λ)ι'.

Finally, if S' is a closed symmetric extension of S with the property S' ⊂ A0, then S' = AΘ for some closed symmetric relation Θ in G such that Θ ⊂ {0} × G by Theorem 2.1.3 (v). As Θ is closed, there exists a closed subspace G'' ⊂ G such that Θ = {0} × G''. With G' = (G'')<sup>⊥</sup> it is clear that the orthogonal decomposition (2.5.21) of G holds and S' is of the form (2.5.22). □
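Statement (iv) says that M' is a corner of M. A finite-dimensional editorial illustration, with a hypothetical 2 × 2 matrix value of M(λ), shows the compression P'M(λ)ι' picking out the upper-left corner and inheriting the positivity of the imaginary part.

```python
# Editorial finite-dimensional instance of Proposition 2.5.12 (iv):
# take G = C^2, G' the first coordinate axis, and a hypothetical 2x2
# matrix value M(lambda) whose imaginary part (M - M*) / 2i is positive
# definite. The compressed value M'(lambda) = P' M(lambda) iota' is the
# (0, 0) entry, and a compression of a positive matrix stays positive.
M = [[1.0 + 2.0j, 0.5 + 0.3j],
     [0.3 + 0.5j, -2.0 + 1.0j]]

# Imaginary part (M - M*) / 2i of the full matrix ...
ImM = [[(M[i][j] - M[j][i].conjugate()) / 2j for j in range(2)]
       for i in range(2)]

# ... is positive definite here (check the leading principal minors).
det = ImM[0][0] * ImM[1][1] - ImM[0][1] * ImM[1][0]
assert ImM[0][0].real > 0 and det.real > 0

# Compression to G': P' M iota' picks out the upper-left entry.
M_compressed = M[0][0]
assert M_compressed.imag > 0
print("ok")
```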

In the situation of Proposition 2.5.12 the intermediate extensions of S' can also be interpreted as intermediate extensions of S. In the next corollary the connection between these extensions relative to the appropriate boundary triplets is explained.

**Corollary 2.5.13.** Assume that the parameter space G has the orthogonal decomposition (2.5.21) and let S' be as in Proposition 2.5.12. Let Θ' be a closed relation in G' and let Θ be the closed linear relation in G defined by

$$
\Theta = \Theta' \oplus (\{0\} \times \mathcal{G}''). \tag{2.5.24}
$$

For the intermediate extensions induced by Θ and Θ' one has

$$\left\{ \widehat{f} \in S^\* \, : \, \Gamma \widehat{f} \in \Theta \right\} = \left\{ \widehat{f} \in (S')^\* \, : \, \Gamma' \widehat{f} \in \Theta' \right\} \tag{2.5.25}$$

or, equivalently, ker (Γ1 − ΘΓ0) = ker (Γ'1 − Θ'Γ'0).

Proof. It is clear that the relation Θ defined in (2.5.24) is a closed relation in G. The identity in (2.5.25) now follows from

$$\begin{aligned} \left\{ \widehat{f} \in S^{\*} : \Gamma \widehat{f} \in \Theta \right\} &= \left\{ \widehat{f} \in S^{\*} : \Gamma \widehat{f} \in \Theta' \,\, \widehat{\oplus} \, \left( \{0\} \times \mathcal{G}'' \right) \right\} \\ &= \left\{ \widehat{f} \in S^{\*} : \left\{ P' \Gamma\_{0} \widehat{f}, P' \Gamma\_{1} \widehat{f} \right\} \in \Theta', \ P'' \Gamma\_{0} \widehat{f} = 0 \right\} \\ &= \left\{ \widehat{f} \in (S')^{\*} : \left\{ \Gamma\_{0} \widehat{f}, P' \Gamma\_{1} \widehat{f} \right\} \in \Theta' \right\} \\ &= \left\{ \widehat{f} \in (S')^{\*} : \Gamma' \widehat{f} \in \Theta' \right\}, \end{aligned}$$

where (2.5.24) has been used in conjunction with the boundary triplet in Proposition 2.5.12. □

Let S and S' be closed symmetric relations in H and H' which are unitarily equivalent, and let {G, Γ0, Γ1} and {G', Γ'0, Γ'1} be boundary triplets for S<sup>∗</sup> and (S')<sup>∗</sup>, respectively. The notion of unitary equivalence for these boundary triplets will now be introduced, which leads to unitary equivalence of the corresponding extensions, γ-fields, and Weyl functions.

Let H and H' be Hilbert spaces and let U ∈ **B**(H, H') be a unitary operator from H onto H'. Let S and S' be closed symmetric relations in H and H', respectively, such that they are unitarily equivalent by means of U, that is,

$$S' = \left\{ \{Uf, Uf'\} : \{f, f'\} \in S \right\} \tag{2.5.26}$$

in the sense of Definition 1.3.7. It follows from (1.3.7) that this assumption is equivalent to S<sup>∗</sup> and (S')<sup>∗</sup> being equivalent under U,

$$(S')^\* = \left\{ \{Uf, Uf'\} \, : \, \{f, f'\} \in S^\* \right\}.$$

Then U maps 𝔑λ(S∗) unitarily onto 𝔑λ((S')∗), and hence

$$\widehat{\mathfrak{N}}\_{\lambda}((S')^\*) = \left\{ \{Uf\_{\lambda}, \lambda Uf\_{\lambda}\} \, : \, \{f\_{\lambda}, \lambda f\_{\lambda}\} \in S^\* \right\}.$$

Furthermore, let V ∈ **B**(G, G') be a unitary mapping from G onto G'. Then the closed relations Θ in G and Θ' in G' are unitarily equivalent if

$$\Theta' = \left\{ \{Vf, Vf'\} : \{f, f'\} \in \Theta \right\}.\tag{2.5.27}$$

The notion of unitary equivalence of two boundary triplets involves not only the unitary equivalence between H and H', but also the unitary equivalence between G and G'.

**Definition 2.5.14.** Let S and S' be closed symmetric relations in H and H', and let {G, Γ0, Γ1} and {G', Γ'0, Γ'1} be boundary triplets for S<sup>∗</sup> and (S')<sup>∗</sup>, respectively. Then {G, Γ0, Γ1} and {G', Γ'0, Γ'1} are said to be unitarily equivalent if there exist a unitary operator U ∈ **B**(H, H') and a unitary operator V ∈ **B**(G, G') such that


$$\Gamma\_0' \{ Uf, Uf' \} = V \Gamma\_0 \{ f, f' \} \quad \text{and} \quad \Gamma\_1' \{ Uf, Uf' \} = V \Gamma\_1 \{ f, f' \} \tag{2.5.28}$$

for all {f, f'} ∈ S<sup>∗</sup>.

In the next proposition it will be shown that for unitarily equivalent boundary triplets the corresponding closed extensions, γ-fields, and Weyl functions are unitarily equivalent.

**Proposition 2.5.15.** Let S and S' be closed symmetric relations in H and H', and let {G, Γ0, Γ1} and {G', Γ'0, Γ'1} be boundary triplets for S<sup>∗</sup> and (S')<sup>∗</sup>, respectively, which are unitarily equivalent by means of the unitary operators U ∈ **B**(H, H') and V ∈ **B**(G, G'). Then the following statements hold:

(i) For all closed relations Θ in G and Θ' in G' connected via (2.5.27) the closed extensions

$$A\_{\Theta} = \left\{ \widehat{f} \in S^\* \, : \, \Gamma \widehat{f} \in \Theta \right\} \quad \text{and} \quad A'\_{\Theta'} = \left\{ \widehat{h} \in (S')^\* \, : \, \Gamma' \widehat{h} \in \Theta' \right\}$$

are unitarily equivalent by means of U ∈ **B**(H, H'), that is,

$$A'\_{\Theta'} = \left\{ \{Uf, Uf'\} \, : \, \{f, f'\} \in A\_{\Theta} \right\}.$$

(ii) The γ-fields γ and γ' corresponding to {G, Γ0, Γ1} and {G', Γ'0, Γ'1} are related by

$$
\gamma'(\lambda) = U\gamma(\lambda)V^{-1}, \quad \lambda \in \rho(A\_0) = \rho(A\_0').
$$

(iii) The Weyl functions M and M′ corresponding to {G, Γ0, Γ1} and {G′, Γ′0, Γ′1} are related by

$$M'(\lambda) = V M(\lambda) V^{-1}, \quad \lambda \in \rho(A\_0) = \rho(A\_0').$$

Proof. (i) It follows from the definition that

$$A\_{\Theta} = \left\{ \{f, f'\} \in S^\* \,:\, \Gamma\{f, f'\} \in \Theta \right\}$$

and

$$A'\_{\Theta'} = \left\{ \{g, g'\} \in (S')^\* \, : \, \Gamma' \{g, g'\} \in \Theta' \right\}.$$

Since S and S′ are unitarily equivalent, so are S∗ and (S′)∗, and hence one has {g, g′} ∈ (S′)∗ if and only if {g, g′} = {Uf, Uf′} for some {f, f′} ∈ S∗. Thus, by the unitary equivalence of the boundary triplets one obtains

$$\begin{split} A'\_{\Theta'} &= \left\{ \{Uf, Uf'\} : \{f, f'\} \in S^\*, \ \{\Gamma'\_0\{Uf, Uf'\}, \Gamma'\_1\{Uf, Uf'\}\} \in \Theta' \right\} \\ &= \left\{ \{Uf, Uf'\} : \{f, f'\} \in S^\*, \ \{V\Gamma\_0\{f, f'\}, V\Gamma\_1\{f, f'\}\} \in \Theta' \right\} \\ &= \left\{ \{Uf, Uf'\} : \{f, f'\} \in S^\*, \ \{\Gamma\_0\{f, f'\}, \Gamma\_1\{f, f'\}\} \in \Theta \right\} \\ &= \left\{ \{Uf, Uf'\} : \{f, f'\} \in A\_{\Theta} \right\}. \end{split}$$

(ii) By item (i), the self-adjoint relations A0 = ker Γ0 and A′0 = ker Γ′0 are unitarily equivalent by means of U, which implies that ρ(A0) = ρ(A′0), and hence the γ-fields γ and γ′ are defined on the same subset of C. For λ ∈ ρ(A0) one computes

$$\begin{split} \gamma'(\lambda) &= \left\{ \{ \Gamma\_0' \{ g\_\lambda, \lambda g\_\lambda \}, g\_\lambda \} : \{ g\_\lambda, \lambda g\_\lambda \} \in \hat{\mathfrak{N}}\_\lambda((S^\prime)^\*) \right\} \\ &= \left\{ \{ \Gamma\_0' \{ U f\_\lambda, \lambda U f\_\lambda \}, U f\_\lambda \} : \{ f\_\lambda, \lambda f\_\lambda \} \in \hat{\mathfrak{N}}\_\lambda(S^\*) \right\} \\ &= \left\{ \{ V \Gamma\_0 \{ f\_\lambda, \lambda f\_\lambda \}, U f\_\lambda \} : \{ f\_\lambda, \lambda f\_\lambda \} \in \hat{\mathfrak{N}}\_\lambda(S^\*) \right\} \\ &= U \gamma(\lambda) V^{-1} . \end{split}$$

(iii) Since ρ(A0) = ρ(A′0), the Weyl functions M and M′ are defined on the same subset of C. Fix λ ∈ ρ(A0), let ψ ∈ G′, and choose {fλ, λfλ} ∈ N̂λ(S∗) such that Γ′0{Ufλ, λUfλ} = ψ. Since {Ufλ, λUfλ} ∈ N̂λ((S′)∗), it follows from the definition of the Weyl function and (2.5.28) that

$$\begin{aligned} M'(\lambda)\psi &= M'(\lambda)\Gamma\_0'\{Uf\_\lambda,\lambda Uf\_\lambda\} \\ &= \Gamma\_1'\{Uf\_\lambda,\lambda Uf\_\lambda\} \\ &= V\Gamma\_1\{f\_\lambda,\lambda f\_\lambda\} \\ &= V M(\lambda)\Gamma\_0\{f\_\lambda,\lambda f\_\lambda\} \\ &= V M(\lambda)V^{-1}\Gamma\_0'\{Uf\_\lambda,\lambda Uf\_\lambda\} \\ &= V M(\lambda)V^{-1}\psi. \end{aligned}$$

This yields M′(λ) = V M(λ)V⁻¹ for all λ ∈ ρ(A0). □

Let S and S′ be closed symmetric relations in H and H′ with boundary triplets {G, Γ0, Γ1} and {G′, Γ′0, Γ′1} that are unitarily equivalent by means of a unitary operator U ∈ **B**(H, H′) and a unitary operator V ∈ **B**(G, G′) as in Proposition 2.5.15. Then according to Proposition 2.5.15 one has that

$$A'\_0 = \left\{ \{Uf, Uf'\} \, : \, \{f, f'\} \in A\_0 \right\};$$

cf. (2.5.28). In particular, this implies that the multivalued parts of A0 and A′0 are connected by

$$\operatorname{mul}\,A\_0' = U(\operatorname{mul}\,A\_0),$$

and since U is unitary it also follows that

$$
\overline{\text{dom}}\,A'\_0 = U(\overline{\text{dom}}\,A\_0).
$$

The following corollary is an immediate consequence of Proposition 2.5.15 (ii) and Proposition 2.3.2 (ii).

**Corollary 2.5.16.** Let P and P′ be the orthogonal projections in H and H′ onto (mul A0)⊥ and (mul A′0)⊥, respectively. Then

$$
\begin{pmatrix} P'\gamma'(\lambda) \\ (I - P')\gamma'(\lambda) \end{pmatrix} = U \begin{pmatrix} P\gamma(\lambda) \\ (I - P)\gamma(\lambda) \end{pmatrix} V^{-1}, \quad \lambda \in \rho(A\_0) = \rho(A'\_0),
$$

where (I − P)γ(λ) = (I − P)γ(λ0) and (I − P′)γ′(λ) = (I − P′)γ′(λ0) are parts that do not depend on λ.

In Theorem 4.2.6 it will be shown that if S and S′ are simple (see Section 3.4) and their Weyl functions are unitarily equivalent, then in fact the corresponding boundary triplets are unitarily equivalent.
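The simplest instance of unitary equivalence may serve as an illustration; the following construction is a sketch added here and is not part of the original text. Given a boundary triplet {G, Γ0, Γ1} for S∗ with γ-field γ and Weyl function M, and a unitary operator U ∈ **B**(H, H′), transplant S by U and keep the boundary mappings, that is, take V = IG and set

$$S' = \big\{ \{Uf, Uf'\} : \{f, f'\} \in S \big\}, \qquad \Gamma'\_i \{Uf, Uf'\} := \Gamma\_i \{f, f'\}, \quad i = 0, 1.$$

Then {G, Γ′0, Γ′1} is a boundary triplet for (S′)∗ that is unitarily equivalent to {G, Γ0, Γ1} in the sense of Definition 2.5.14, and Proposition 2.5.15 yields γ′(λ) = Uγ(λ) and M′(λ) = M(λ); the Weyl function is insensitive to a unitary change of the underlying Hilbert space.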

## **2.6 Kreĭn's formula for intermediate extensions**

Let S be a closed symmetric relation in a Hilbert space H and assume that {G, Γ0, Γ1} is a boundary triplet for S∗. According to Theorem 2.1.3, the mapping Γ = (Γ0, Γ1) induces a bijective correspondence between the set of (closed) intermediate extensions AΘ of S and the set of (closed) relations Θ in G, via

$$\Theta \mapsto A\_{\Theta} = \left\{ \widehat{f} \in S^\* \, : \, \Gamma \widehat{f} \in \Theta \right\} = \ker \left( \Gamma\_1 - \Theta \Gamma\_0 \right);$$

and A0 = ker Γ0 corresponds to Θ = {0} × G. For λ ∈ ρ(A0) the relation (AΘ − λ)⁻¹ will be regarded as a perturbation of the resolvent of the self-adjoint extension A0 of S. This is expressed by the formula provided in Theorem 2.6.1; some variants under the additional assumption λ ∈ ρ(AΘ) are discussed afterwards. In the special case λ ∈ ρ(AΘ) one has (AΘ − λ)⁻¹ ∈ **B**(H), and the resolvent difference, and hence also the perturbation term, are bounded operators. Moreover, it is shown later how the different types of spectral points λ ∈ σ(AΘ) which are contained in ρ(A0) are related to the Weyl function and the parameter Θ. A more in-depth treatment of the connection between the spectrum and the Weyl function can be found in Chapter 3.

In the next theorem the difference of (AΘ − λ)⁻¹ and (A0 − λ)⁻¹, λ ∈ ρ(A0), is expressed by a perturbation term which involves the Weyl function M and the parameter Θ. This results in a general version of Kreĭn's formula for intermediate extensions.

**Theorem 2.6.1.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, and let γ and M be the corresponding γ-field and Weyl function, respectively. Moreover, let Θ be a closed relation in G and let

$$A\_{\Theta} = \left\{ \widehat{f} \in S^\* \, : \, \Gamma \widehat{f} \in \Theta \right\} \tag{2.6.1}$$

be the corresponding extension via (2.1.5). Then for all λ ∈ ρ(A0) one has the equality

$$(A\_{\Theta} - \lambda)^{-1} = (A\_0 - \lambda)^{-1} + \gamma(\lambda) \left(\Theta - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\*,\tag{2.6.2}$$

where the inverses in the first and the last term are taken in the sense of relations. Moreover, if λ ∈ ρ(A0) ∩ ρ(AΘ), then (Θ − M(λ))⁻¹ ∈ **B**(G) and (2.6.2) holds in the sense of bounded linear operators.

Proof. Assume that λ ∈ ρ(A0). In order to establish the identity (2.6.2) it must be shown that the relations on the left-hand side and right-hand side coincide.

First the inclusion (⊂) in (2.6.2) will be shown. For this purpose, consider {g, g′} ∈ (AΘ − λ)⁻¹ so that, equivalently, ĝΘ = {g′, g + λg′} ∈ AΘ. Moreover, denote

$$\widehat{g}\_0 = \left\{ (A\_0 - \lambda)^{-1} g, (I + \lambda (A\_0 - \lambda)^{-1}) g \right\} \in A\_0.$$

Then

$$
\widehat{g}\_{\Theta} - \widehat{g}\_0 = \left\{ g' - (A\_0 - \lambda)^{-1} g, \lambda (g' - (A\_0 - \lambda)^{-1} g) \right\},
$$

and hence ĝΘ − ĝ0 ∈ N̂λ(S∗). Since γ̂(λ) maps G onto N̂λ(S∗), there exists an element ϕ ∈ G such that

$$
\widehat{g}\_{\Theta} = \widehat{g}\_0 + \widehat{\gamma}(\lambda)\varphi. \tag{2.6.3}
$$

By Proposition 2.3.6 (ii) one has Γγ̂(λ)ϕ = {ϕ, M(λ)ϕ} and, moreover, Proposition 2.3.2 (iv) shows that Γĝ0 = {0, γ(λ̄)∗g}. Since ĝΘ ∈ AΘ, an application of Γ to (2.6.3) yields

$$\{0, \gamma(\overline{\lambda})^\* g\} + \{\varphi, M(\lambda)\varphi\} = \Gamma \widehat{g}\_0 + \Gamma \widehat{\gamma}(\lambda)\varphi = \Gamma \widehat{g}\_\Theta \in \Theta,$$

see (2.6.1). Thus, {ϕ, γ(λ̄)∗g + M(λ)ϕ} ∈ Θ and {ϕ, γ(λ̄)∗g} ∈ Θ − M(λ) or, equivalently, {g, ϕ} ∈ (Θ − M(λ))⁻¹γ(λ̄)∗, which implies that

$$\{g, \gamma(\lambda)\varphi\} \in \gamma(\lambda)(\Theta - M(\lambda))^{-1}\gamma(\overline{\lambda})^\*.\tag{2.6.4}$$

Now consider the first component g′ = (A0 − λ)⁻¹g + γ(λ)ϕ in the identity (2.6.3). Then one has

$$\{g, g'\} = \left\{g, (A\_0 - \lambda)^{-1} g + \gamma(\lambda)\varphi\right\},\tag{2.6.5}$$

and due to {g, (A0 − λ)⁻¹g} ∈ (A0 − λ)⁻¹ and (2.6.4) it follows from (2.6.5) that

$$\{g, g'\} \in (A\_0 - \lambda)^{-1} + \gamma(\lambda) \left(\Theta - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\*,$$

and hence the inclusion (⊂) in (2.6.2) holds.

Next the inclusion (⊃) in (2.6.2) will be shown. For this purpose, let

$$\{g, g'\} \in (A\_0 - \lambda)^{-1} + \gamma(\lambda) \left(\Theta - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\*.$$

By the definition of the sum of relations, this means that

$$g' = \left(A\_0 - \lambda\right)^{-1} g + h,\quad \text{where}\quad \{g, h\} \in \gamma(\lambda) \left(\Theta - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\*.$$

Recall from Proposition 2.3.2 (i) that γ(λ) ∈ **B**(G, H), since λ ∈ ρ(A0). Hence, h = γ(λ)ϕ, where {γ(λ̄)∗g, ϕ} ∈ (Θ − M(λ))⁻¹. Consequently, it is clear that {ϕ, γ(λ̄)∗g + M(λ)ϕ} ∈ Θ. Next observe that

$$\{g', g + \lambda g'\} = \left\{ (A\_0 - \lambda)^{-1} g, (I + \lambda(A\_0 - \lambda)^{-1}) g \right\} + \left\{ \gamma(\lambda)\varphi, \lambda \gamma(\lambda)\varphi \right\},$$

which implies that

$$\Gamma\{g', g + \lambda g'\} = \{0, \gamma(\overline{\lambda})^\* g\} + \{\varphi, M(\lambda)\varphi\} \in \Theta$$

or {g′, g + λg′} ∈ AΘ. Thus, {g, g′} ∈ (AΘ − λ)⁻¹ and therefore

$$(A\_0 - \lambda)^{-1} + \gamma(\lambda) \left( \Theta - M(\lambda) \right)^{-1} \gamma(\overline{\lambda})^\* \subset (A\_\Theta - \lambda)^{-1}.$$

Hence, the inclusion (⊃) in (2.6.2) has been shown.

To prove the last assertion in the theorem, assume that λ ∈ ρ(A0) ∩ ρ(AΘ). It is first shown that ker (Θ − M(λ)) = {0}. To see this, let ϕ ∈ ker (Θ − M(λ)). Then clearly {ϕ, M(λ)ϕ} ∈ Θ. Define fλ = γ(λ)ϕ, so that f̂λ = {fλ, λfλ} ∈ N̂λ(S∗) and

$$\Gamma \widehat{f}\_{\lambda} = \{\Gamma\_0 \widehat{f}\_{\lambda}, \Gamma\_1 \widehat{f}\_{\lambda}\} = \{\Gamma\_0 \widehat{f}\_{\lambda}, M(\lambda)\Gamma\_0 \widehat{f}\_{\lambda}\} = \{\varphi, M(\lambda)\varphi\} \in \Theta.$$

Thus, f̂λ ∈ AΘ and fλ = γ(λ)ϕ ∈ ker (AΘ − λ). Since λ ∈ ρ(AΘ), one concludes that γ(λ)ϕ = 0, and since γ(λ) is injective, ϕ = 0. Hence, ker (Θ − M(λ)) = {0}.

Next it is shown that (Θ − M(λ))⁻¹ ∈ **B**(G). Since λ ∈ ρ(A0), the identity (2.6.2) holds, and as it is assumed that λ ∈ ρ(AΘ), one has dom (AΘ − λ)⁻¹ = H. Therefore,

$$\text{dom}\left[\left(\Theta - M(\lambda)\right)^{-1}\gamma(\bar{\lambda})^\*\right] = \text{dom}\left[\gamma(\lambda)\left(\Theta - M(\lambda)\right)^{-1}\gamma(\bar{\lambda})^\*\right] = \mathfrak{H},\tag{2.6.6}$$

where the first identity is clear since γ(λ) ∈ **B**(G, H). As ran γ(λ̄)∗ = G, one concludes from (2.6.6) that dom (Θ − M(λ))⁻¹ = G. By assumption, Θ is closed, and then M(λ) ∈ **B**(G) implies that Θ − M(λ) is closed. Then (Θ − M(λ))⁻¹ is a closed operator and by the closed graph theorem (Θ − M(λ))⁻¹ ∈ **B**(G). □
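To see formula (2.6.2) in the simplest situation, the following sketch (an illustration added here, not contained in the original text) may be helpful. Assume that S has defect numbers (1, 1), so that one may take G = C. Then M is a scalar function, γ(λ) ∈ H, and a self-adjoint parameter Θ = θ ∈ R corresponds to the extension Aθ = ker (Γ1 − θΓ0). With the inner product of H linear in the first entry, one has γ(λ̄)∗g = (g, γ(λ̄)) for g ∈ H, and Kreĭn's formula becomes the rank-one perturbation

$$(A\_\theta - \lambda)^{-1} = (A\_0 - \lambda)^{-1} + \frac{\big(\,\cdot\,, \gamma(\overline{\lambda})\big)}{\theta - M(\lambda)}\, \gamma(\lambda), \qquad \lambda \in \rho(A\_0) \cap \rho(A\_\theta).$$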

Assume that λ ∈ ρ(A0). In Theorem 2.6.1 it was shown that λ ∈ ρ(AΘ) then leads to (Θ − M(λ))⁻¹ ∈ **B**(G). In fact, for λ ∈ ρ(A0) there is a one-to-one correspondence between the spectral properties of AΘ at the point λ and those of Θ − M(λ) at the point 0. The following theorem and its corollary are direct consequences of the Kreĭn formula (2.6.2). A complete description of the spectrum of self-adjoint extensions AΘ in terms of the singularities of the function λ ↦ (Θ − M(λ))⁻¹ is given in Section 3.8.

**Theorem 2.6.2.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, and let γ and M be the corresponding γ-field and Weyl function, respectively. Moreover, let Θ be a closed relation in G and let

$$A\_{\Theta} = \{ \widehat{f} \in S^\* : \Gamma \widehat{f} \in \Theta \}$$

be the corresponding extension via (2.1.5). Then the following statements hold for all λ ∈ ρ(A0):

(i) λ ∈ σp(AΘ) ⇔ 0 ∈ σp(Θ − M(λ)), and in this case

$$\ker\left(A\_{\Theta} - \lambda\right) = \gamma(\lambda)\ker\left(\Theta - M(\lambda)\right);\tag{2.6.7}$$

(ii) λ ∈ σr(AΘ) ⇔ 0 ∈ σr(Θ − M(λ));

(iii) λ ∈ σc(AΘ) ⇔ 0 ∈ σc(Θ − M(λ));

(iv) λ ∈ ρ(AΘ) ⇔ 0 ∈ ρ(Θ − M(λ)).

Proof. Assume that λ ∈ ρ(A0) and consider the right-hand side of (2.6.2) as the sum of the operator (A0 − λ)⁻¹ ∈ **B**(H) and the relation

$$\gamma(\lambda) \left(\Theta - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\*$$

in H. Hence, the domain of the right-hand side of (2.6.2) is given by

$$\operatorname{dom}\left[\gamma(\lambda)\left(\Theta - M(\lambda)\right)^{-1}\gamma(\overline{\lambda})^\*\right] = \operatorname{dom}\left[\left(\Theta - M(\lambda)\right)^{-1}\gamma(\overline{\lambda})^\*\right],$$

where it was used that γ(λ) ∈ **B**(G, H). Thus, it follows from (2.6.2) that

$$\operatorname{dom}\left(A\_{\Theta}-\lambda\right)^{-1}=\operatorname{dom}\left(\Theta-M(\lambda)\right)^{-1}\gamma(\overline{\lambda})^{\*}.\tag{2.6.8}$$

Due to the definition of the sum of relations and mul (A0 − λ)⁻¹ = {0}, the multivalued part of the right-hand side of (2.6.2) is given by

$$\begin{aligned} \operatorname{mul}\left[\gamma(\lambda)\big(\Theta - M(\lambda)\big)^{-1}\gamma(\overline{\lambda})^\*\right] &= \operatorname{mul}\left[\gamma(\lambda)\big(\Theta - M(\lambda)\big)^{-1}\right] \\ &= \gamma(\lambda)\operatorname{mul}\left(\Theta - M(\lambda)\right)^{-1}. \end{aligned}$$

Thus, it follows from (2.6.2) that

$$\operatorname{mul}\left(A\_{\Theta}-\lambda\right)^{-1}=\gamma(\lambda)\operatorname{mul}\left(\Theta-M(\lambda)\right)^{-1}.\tag{2.6.9}$$

The proof of the theorem is based on the identities (2.6.8) and (2.6.9).

For the interpretation of (2.6.8) recall that γ(λ), λ ∈ ρ(A0), maps G isomorphically onto Nλ(S∗); see Proposition 2.3.2 (i). This implies that the restriction

$$\gamma(\overline{\lambda})^\* : \mathfrak{N}\_{\overline{\lambda}}(S^\*) \to \mathfrak{G} \quad \text{is an isomorphism.}$$

In particular, γ(λ̄)∗ is a bijection between closed or dense subspaces in Nλ̄(S∗) and closed or dense subspaces in G, respectively. Now assume that V is a closed relation in G. Since ker γ(λ̄)∗ = (Nλ̄(S∗))⊥, it follows that

$$\text{dom}\,V\gamma(\overline{\lambda})^\* \text{ is closed in } \mathfrak{H} \quad \Leftrightarrow \quad \text{dom}\,V \text{ is closed in } \mathfrak{G},\tag{2.6.10}$$

and

$$\text{dom}\,V\gamma(\overline{\lambda})^\* \text{ is dense in } \mathfrak{H} \quad \Leftrightarrow \quad \text{dom}\,V \text{ is dense in } \mathfrak{G}.\tag{2.6.11}$$

(i) The identity (2.6.7) follows from (2.6.9). It is clear that (2.6.7) implies the equivalence λ ∈ σp(AΘ) ⇔ 0 ∈ σp(Θ − M(λ)).

(iii) It follows from (2.6.8) that ran (AΘ − λ) is a dense, nonclosed subspace of H if and only if dom (Θ − M(λ))⁻¹γ(λ̄)∗ is a dense, nonclosed subspace of H. By (2.6.10) and (2.6.11) with V = (Θ − M(λ))⁻¹, this is equivalent to ran (Θ − M(λ)) being a dense, nonclosed subspace of G. In addition, it follows from (i) that AΘ − λ is injective if and only if Θ − M(λ) is injective. This proves the assertion.

(iv) The implication (⇒) holds by Theorem 2.6.1. The implication (⇐) is easy to see. Indeed, assume that 0 ∈ ρ(Θ − M(λ)). Then (Θ − M(λ))⁻¹ ∈ **B**(G), and since γ(λ) ∈ **B**(G, H) and γ(λ̄)∗ ∈ **B**(H, G) for λ ∈ ρ(A0), one concludes from (2.6.2) that (AΘ − λ)⁻¹ ∈ **B**(H), i.e., λ ∈ ρ(AΘ).

(ii) This assertion is a consequence of (i), (iii), and (iv). □
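In the scalar setting of defect numbers (1, 1), with G = C and Θ = θ ∈ R, Theorem 2.6.2 reduces to a familiar statement; the following worked instance is an illustration added here and is not part of the original text. For λ ∈ ρ(A0) one has

$$\lambda \in \sigma\_p(A\_\theta) \quad \Leftrightarrow \quad M(\lambda) = \theta, \qquad \ker (A\_\theta - \lambda) = \operatorname{span} \{\gamma(\lambda)\},$$

since ker (θ − M(λ)) is either {0} or all of C, so that by (2.6.7) the eigenvalues of Aθ in ρ(A0) are precisely the solutions of the equation M(λ) = θ.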

The Kreĭn formula (2.6.2) was formulated above in terms of the closed relation Θ in the Hilbert space G. Now the form of the Kreĭn formula will be given when a tight parametric representation of Θ is chosen; cf. Section 1.10.

**Corollary 2.6.3.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, and let γ and M be the corresponding γ-field and Weyl function, respectively. Let the closed relation Θ have the parametric representation

$$\Theta = \left\{ \{ \mathcal{A}e, \mathcal{B}e \} : e \in \mathcal{E} \right\}, \tag{2.6.12}$$

where E is a Hilbert space and A, B ∈ **B**(E, G), and assume that this representation of Θ is tight, i.e., ker A ∩ ker B = {0} holds. Then for all λ ∈ ρ(A0) one has

$$
\lambda \in \rho(A\_{\Theta}) \quad \Leftrightarrow \quad \left(\mathcal{B} - M(\lambda)\mathcal{A}\right)^{-1} \in \mathbf{B}(\mathcal{G}, \mathcal{E}),
$$

and in this case

$$(A\_{\Theta} - \lambda)^{-1} = (A\_0 - \lambda)^{-1} + \gamma(\lambda)\mathcal{A}\left(\mathcal{B} - M(\lambda)\mathcal{A}\right)^{-1}\gamma(\overline{\lambda})^\*.\tag{2.6.13}$$

Proof. According to Theorem 2.6.2, for λ ∈ ρ(A0) one has

$$
\lambda \in \rho(A\_{\Theta}) \quad \Leftrightarrow \quad 0 \in \rho(\Theta - M(\lambda)),
$$

that is, λ ∈ ρ(AΘ) if and only if (Θ − M(λ))⁻¹ ∈ **B**(G). Due to the tightness of the representation (2.6.12), Lemma 1.11.6 shows that

$$\left(\Theta - M(\lambda)\right)^{-1}\in \mathbf{B}(\mathcal{G})\quad\Leftrightarrow\quad\left(\mathcal{B} - M(\lambda)\mathcal{A}\right)^{-1}\in \mathbf{B}(\mathcal{G},\mathcal{E}),$$

as for all λ ∈ ρ(A0) one has that M(λ) ∈ **B**(G). In this case it follows that (Θ − M(λ))⁻¹ = A(B − M(λ)A)⁻¹. Furthermore, the resolvent formula (2.6.13) follows from (2.6.2). □
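Two extreme choices in (2.6.12) may serve as a consistency check; this short sketch is an illustration added here and is not part of the original text. For E = G, A = 0, and B = I one obtains a tight representation of Θ = {0} × G, the parameter corresponding to A0; then B − M(λ)A = I is boundedly invertible for every λ ∈ ρ(A0), and (2.6.13) collapses to

$$(A\_\Theta - \lambda)^{-1} = (A\_0 - \lambda)^{-1},$$

as it must. For E = G, A = I, and an operator B ∈ **B**(G), (2.6.12) is a tight representation of the operator Θ = B, and (2.6.13) coincides with (2.6.2).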

Let again Θ be a closed relation in G and assume, in the same way as at the end of Section 2.2, that Θ admits an orthogonal decomposition

$$
\Theta = \Theta\_{\text{op}} \oplus \Theta\_{\text{mul}}, \qquad \mathcal{G} = \mathcal{G}\_{\text{op}} \oplus \mathcal{G}\_{\text{mul}}, \tag{2.6.14}
$$

into a (not necessarily densely defined) operator part Θop acting in the Hilbert space Gop = (mul Θ)⊥, the closure of dom Θ∗, and a multivalued part Θmul = {0} × mul Θ in the Hilbert space Gmul = mul Θ; cf. Section 1.3. Recall that, in particular, closed symmetric, self-adjoint, (maximal) dissipative, and (maximal) accumulative relations Θ in G admit such a decomposition.


**Corollary 2.6.4.** Assume that the closed relation Θ in Theorem 2.6.1 has the orthogonal decomposition (2.6.14), let Pop be the orthogonal projection onto Gop, and denote the canonical embedding of Gop into G by ιop. Let

$$A\_{\Theta} = \left\{ \widehat{f} \in S^\* : \Gamma \widehat{f} \in \Theta \right\}$$

be the intermediate extension of S via (2.2.12). Then for all λ ∈ ρ(A0) ∩ ρ(AΘ) one has (Θop − PopM(λ)ιop)⁻¹ ∈ **B**(Gop) and

$$(A\_{\Theta} - \lambda)^{-1} = (A\_0 - \lambda)^{-1} + \gamma(\lambda)\iota\_{\text{op}}\left(\Theta\_{\text{op}} - P\_{\text{op}}M(\lambda)\iota\_{\text{op}}\right)^{-1}P\_{\text{op}}\gamma(\overline{\lambda})^\*.$$

Proof. In view of (2.6.14), one sees that for λ ∈ ρ(A0) ∩ ρ(AΘ)

$$(\Theta - M(\lambda)) = \left\{ \left\{ \begin{pmatrix} \varphi \\ 0 \end{pmatrix}, \begin{pmatrix} \Theta\_{\mathrm{op}}\varphi - P\_{\mathrm{op}}M(\lambda)\iota\_{\mathrm{op}}\varphi \\ \psi - (I - P\_{\mathrm{op}})M(\lambda)\iota\_{\mathrm{op}}\varphi \end{pmatrix} \right\} : \begin{aligned} \varphi \in \mathrm{dom}\,\Theta\_{\mathrm{op}}, \\ \psi \in \mathcal{G}\_{\mathrm{mul}} \end{aligned} \right\},$$

and hence

$$\left(\Theta - M(\lambda)\right)^{-1} = \left\{ \left\{ \begin{pmatrix} \Theta\_{\mathrm{op}}\varphi - P\_{\mathrm{op}}M(\lambda)\iota\_{\mathrm{op}}\varphi\\ \chi \end{pmatrix}, \begin{pmatrix} \varphi\\ 0 \end{pmatrix} \right\} : \begin{array}{l} \varphi \in \mathrm{dom}\,\Theta\_{\mathrm{op}},\\ \chi \in \mathcal{G}\_{\mathrm{mul}} \end{array} \right\}.$$

Since (Θ − M(λ))⁻¹ ∈ **B**(G), one has

$$\ker\left(\Theta\_{\mathrm{op}} - P\_{\mathrm{op}}M(\lambda)\iota\_{\mathrm{op}}\right) = \{0\} \quad \text{and} \quad \mathrm{ran}\left(\Theta\_{\mathrm{op}} - P\_{\mathrm{op}}M(\lambda)\iota\_{\mathrm{op}}\right) = \mathcal{G}\_{\mathrm{op}}.$$

This shows that Θop − PopM(λ)ιop is a bijective closed operator in Gop. Hence, (Θop − PopM(λ)ιop)⁻¹ ∈ **B**(Gop) and so

$$\left(\Theta - M(\lambda)\right)^{-1} = \begin{pmatrix} (\Theta\_{\mathrm{op}} - P\_{\mathrm{op}}M(\lambda)\iota\_{\mathrm{op}})^{-1} & 0\\ 0 & 0 \end{pmatrix}$$

with respect to the decomposition (2.6.14). Now the identity

$$\left(\Theta - M(\lambda)\right)^{-1} = \iota\_{\rm op} \left(\Theta\_{\rm op} - P\_{\rm op}M(\lambda)\iota\_{\rm op}\right)^{-1} P\_{\rm op}$$

is an immediate consequence. This together with Theorem 2.6.1 implies the statement. □

If the closed relation Θ in G admits a decomposition of the form

$$
\Theta = \Theta' \oplus (\{0\} \times \mathcal{G}'') \tag{2.6.15}
$$

as in Corollary 2.5.13, where Θ′ is a closed relation in the Hilbert space G′ and G = G′ ⊕ G″, then Kreĭn's formula can also be interpreted in the context of the intermediate symmetric extension S′ of S in Proposition 2.5.12 and the corresponding restriction of the boundary triplet {G, Γ0, Γ1}. More precisely, if Θ is of the form (2.6.15) and S′ and the boundary triplet {G′, Γ′0, Γ′1} are as in Proposition 2.5.12 with corresponding γ-field γ′ and Weyl function M′, and

$$A\_{\Theta'} = \left\{ \widehat{f} \in (S')^\* : \Gamma' \widehat{f} \in \Theta' \right\} = \ker \left( \Gamma'\_1 - \Theta' \Gamma'\_0 \right),$$

then AΘ′ = ker (Γ1 − ΘΓ0) = AΘ and A′0 = ker Γ′0 = ker Γ0 = A0 hold by Corollary 2.5.13 and Proposition 2.5.12, respectively. Moreover, by Theorem 2.6.1 one has (Θ′ − M′(λ))⁻¹ ∈ **B**(G′) for all λ ∈ ρ(AΘ′) ∩ ρ(A′0) and

$$(A\_{\Theta'} - \lambda)^{-1} = (A\_0' - \lambda)^{-1} + \gamma'(\lambda) \left(\Theta' - M'(\lambda)\right)^{-1} \gamma'(\overline{\lambda})^\*.$$

In the special case where Θ′ = Θop and {0} × G″ = Θmul as in (2.6.14) one has

$$M'(\lambda) = P\_{\rm op}M(\lambda)\iota\_{\rm op} \quad \text{and} \quad \gamma'(\lambda) = \gamma(\lambda)\iota\_{\rm op},$$

so that Kreĭn's formula in Corollary 2.6.4 can be rewritten in the form

$$(A\_{\Theta\_{\mathrm{op}}} - \lambda)^{-1} = (A\_0' - \lambda)^{-1} + \gamma'(\lambda) \left(\Theta\_{\mathrm{op}} - M'(\lambda)\right)^{-1} \gamma'(\overline{\lambda})^\*.$$

The behavior of Kreĭn's formula under transformations of boundary triplets will be discussed next. To this end, suppose that S is a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, and let γ and M be the corresponding γ-field and Weyl function, respectively. Consider a closed extension

$$A\_{\Theta} = \left\{ \widehat{f} \in S^\* \, : \, \Gamma \widehat{f} \in \Theta \right\} = \ker \left( \Gamma\_1 - \Theta \Gamma\_0 \right)$$

corresponding to a closed relation Θ in G. Then for all λ ∈ ρ(AΘ) ∩ ρ(A0) one has

$$(A\_{\Theta} - \lambda)^{-1} = (A\_0 - \lambda)^{-1} + \gamma(\lambda) \left(\Theta - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\*$$

according to Theorem 2.6.1. Let G′ be a further Hilbert space and assume that W ∈ **B**(G × G, G′ × G′) satisfies the identities in (2.5.1). Let {G′, Γ′0, Γ′1} be the corresponding transformed boundary triplet in (2.5.2) with γ-field γ′ and Weyl function M′ specified in Proposition 2.5.3. Let A′0 = ker Γ′0 and define the closed relation Θ′ in G′ by Θ′ = W[Θ]; cf. (2.5.4). By Proposition 2.5.2, one has

$$A\_{\Theta} = \ker\left(\Gamma\_1 - \Theta \Gamma\_0\right) = \ker\left(\Gamma\_1' - \Theta' \Gamma\_0'\right) = A\_{\Theta'}',$$

and hence for all λ ∈ ρ(AΘ) ∩ ρ(A′0) Kreĭn's formula in Theorem 2.6.1 takes the form

$$(A\_{\Theta} - \lambda)^{-1} = (A\_0' - \lambda)^{-1} + \gamma'(\lambda) \left(\Theta' - M'(\lambda)\right)^{-1} \gamma'(\overline{\lambda})^\* = (A\_{\Theta'}' - \lambda)^{-1}.$$

In this sense Kreĭn's formula is invariant under transformations of boundary triplets.

Next, Theorem 2.6.2 will be complemented for the case where the extensions are self-adjoint. Recall from Section 1.5 that for a self-adjoint relation H a spectral point λ ∈ R belongs to the discrete spectrum σd(H) if λ is an eigenvalue of finite multiplicity which is an isolated point of σ(H). It will be used that λ ∈ σd(H) if and only if

$$\dim \ker \left( H - \lambda \right) < \infty \quad \text{and} \quad \text{ran} \left( H - \lambda \right) = \overline{\text{ran}} \left( H - \lambda \right). \tag{2.6.16}$$

The complement of the discrete spectrum of H in σ(H) is the essential spectrum, denoted by σess(H).

**Theorem 2.6.5.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, and let M be the corresponding Weyl function. Let Θ be a self-adjoint relation in G and let

$$A\_{\Theta} = \{ \widehat{f} \in S^\* : \Gamma \widehat{f} \in \Theta \}$$

be the corresponding self-adjoint extension via (2.1.5). Then the following statements hold for all λ ∈ ρ(A0):

(i) λ ∈ σd(AΘ) ⇔ 0 ∈ σd(Θ − M(λ));

(ii) λ ∈ σess(AΘ) ⇔ 0 ∈ σess(Θ − M(λ)).


Proof. Here one relies on the observations made in the proof of Theorem 2.6.2. Assume that λ ∈ ρ(A0).

(i) It follows from Theorem 2.6.2 (i) that

$$0 < \dim \ker \left( A\_{\Theta} - \lambda \right) < \infty \quad \Leftrightarrow \quad 0 < \dim \ker \left( \Theta - M(\lambda) \right) < \infty,$$

and it follows from (2.6.8) and (2.6.10) with V = (Θ − M(λ))⁻¹ that

ran (AΘ − λ) closed ⇔ ran (Θ − M(λ)) closed.

Now assertion (i) is a consequence of the above equivalences and the characterization (2.6.16) of discrete eigenvalues of self-adjoint relations.

(ii) Note that λ ∈ σ(AΘ) if and only if 0 ∈ σ(Θ − M(λ)) by Theorem 2.6.2. Hence, this assertion is a consequence of item (i), σess(AΘ) = σ(AΘ) \ σd(AΘ), and σess(Θ − M(λ)) = σ(Θ − M(λ)) \ σd(Θ − M(λ)). □

## **2.7 Kreĭn's formula for exit space extensions**

The Kreĭn formula in Theorem 2.6.1 holds for intermediate extensions of a symmetric relation S in a Hilbert space H. In particular, these intermediate extensions include maximal dissipative, maximal accumulative, and self-adjoint extensions. Now consider larger Hilbert spaces K which contain H as a closed subspace and self-adjoint relations Ã in K which extend S, as studied by Kreĭn and Naĭmark. It will be shown that such self-adjoint extensions induce families of relations in H which also extend S. For these families of relations there is a version of the Kreĭn formula, which will also be called the Kreĭn–Naĭmark formula in this text.

The following notions are useful. Let H and H′ be Hilbert spaces and let Ã be a self-adjoint relation in the Hilbert space H ⊕ H′. The Štraus family T(λ), λ ∈ C, in H corresponding to the self-adjoint relation Ã in H ⊕ H′ is defined by

$$T(\lambda) = \left\{ \{f, f'\} \in \mathfrak{H} \times \mathfrak{H} : \left\{ \begin{pmatrix} f \\ h \end{pmatrix}, \begin{pmatrix} f' \\ h' \end{pmatrix} \right\} \in \tilde{A}, h' = \lambda h \right\}.\tag{2.7.1}$$

Here a vector notation is used for the elements

$$
\begin{pmatrix} f \\ h \end{pmatrix} \in \text{dom}\,\tilde{A} \subset \mathfrak{H} \oplus \mathfrak{H}' \quad \text{and} \quad \begin{pmatrix} f' \\ h' \end{pmatrix} \in \text{ran}\,\tilde{A} \subset \mathfrak{H} \oplus \mathfrak{H}',
$$

where f, f′ ∈ H and h, h′ ∈ H′. This notation will be frequently used in the rest of this section. Closely associated with the Štraus family T(λ) is the compressed resolvent R(λ) ∈ **B**(H) of the self-adjoint relation Ã in H ⊕ H′, defined by

$$R(\lambda) = P\_{\mathfrak{H}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}}, \quad \lambda \in \rho(\tilde{A});\tag{2.7.2}$$

here PH : H ⊕ H′ → H denotes the orthogonal projection from H ⊕ H′ onto H and ιH : H → H ⊕ H′ is the canonical embedding of H into H ⊕ H′.

**Lemma 2.7.1.** Let Ã be a self-adjoint relation in H ⊕ H′. Then the following statements hold for the Štraus family T(λ) in (2.7.1):


Moreover, the following statements hold for the compressed resolvent R(λ) ∈ **B**(H) in (2.7.2):


$$\frac{\operatorname{Im} R(\lambda)}{\operatorname{Im} \lambda} - R(\lambda) R(\lambda)^\* \ge 0. \tag{2.7.3}$$

Furthermore, the Štraus family T(λ) in (2.7.1) and the compressed resolvent R(λ) in (2.7.2) are related via

$$R(\lambda) = (T(\lambda) - \lambda)^{-1}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{2.7.4}$$

Proof. It is clear from (2.7.1) that for λ ∈ C the Štraus family T(λ) satisfies

$$(T(\lambda) - \lambda)^{-1} = \left\{ \{f' - \lambda f, f\} \in \mathfrak{H} \times \mathfrak{H} : \left\{ \begin{pmatrix} f \\ h \end{pmatrix}, \begin{pmatrix} f' \\ h' \end{pmatrix} \right\} \in \widetilde{A}, h' = \lambda h \right\}.$$

On the other hand, it is clear that

$$(\tilde{A} - \lambda)^{-1} = \left\{ \left\{ \begin{pmatrix} f' - \lambda f \\ h' - \lambda h \end{pmatrix}, \begin{pmatrix} f \\ h \end{pmatrix} \right\} : \left\{ \begin{pmatrix} f \\ h \end{pmatrix}, \begin{pmatrix} f' \\ h' \end{pmatrix} \right\} \in \tilde{A} \right\},$$

so that the compressed resolvent of Ã in (2.7.2) is given for λ ∈ ρ(Ã) by

$$\begin{aligned} R(\lambda) &= P\_{\mathfrak{H}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}} \\ &= \left\{ \{f' - \lambda f, f\} \in \mathfrak{H} \times \mathfrak{H} : \left\{ \begin{pmatrix} f \\ h \end{pmatrix}, \begin{pmatrix} f' \\ h' \end{pmatrix} \right\} \in \tilde{A}, h' = \lambda h \right\}. \end{aligned}$$

Comparison of the above identities shows that (2.7.4) holds.

The assertions (iv) and (v) follow immediately from (2.7.2).

(i) Let {f, f′} ∈ T(λ). Then there exists a pair {h, h′} such that

$$\left\{ \begin{pmatrix} f \\ h \end{pmatrix}, \begin{pmatrix} f' \\ h' \end{pmatrix} \right\} \in \tilde{A} \quad \text{and} \quad h' = \lambda h.$$

Since Ã is self-adjoint, it follows that

$$0 = \operatorname{Im}\left( (f',f) + (h',h) \right) = \operatorname{Im}\left( f',f \right) + (\operatorname{Im}\lambda)(h,h)$$

and this implies that T(λ) is accumulative (dissipative) for λ ∈ C⁺ (λ ∈ C⁻). The maximality follows from (2.7.4), since ran (T(λ) − λ) = H; cf. Theorem 1.6.4.

(ii) It follows from (v) and (2.7.4) that

$$(T(\lambda) - \lambda)^{-1} = \left( (T(\overline{\lambda}) - \overline{\lambda})^{-1} \right)^\*,$$

and this implies T(λ)∗ = T(λ̄).

(iii) This is clear from (iv) and (2.7.4).

(vi) Let φ ∈ H and φ′ = R(λ)φ. Then (2.7.4) implies {φ′, φ + λφ′} ∈ T(λ). For λ ∈ C⁺ the relation T(λ) is maximal accumulative and hence, together with (v), one obtains

$$\begin{aligned} 0 \ge \operatorname{Im} \left( \varphi + \lambda \varphi', \varphi' \right) &= \operatorname{Im} \left( \varphi, R(\lambda)\varphi \right) + \left( \operatorname{Im} \lambda \right) \left\| R(\lambda)\varphi \right\|^2 \\ &= \operatorname{Im} \left( R(\overline{\lambda})\varphi, \varphi \right) - \left( \operatorname{Im} \overline{\lambda} \right) \left\| R(\overline{\lambda})^\* \varphi \right\|^2. \end{aligned}$$

Thus, it follows for λ ∈ C⁺ and φ ∈ H that

$$\frac{\operatorname{Im}\left(R(\bar{\lambda})\varphi,\varphi\right)}{\operatorname{Im}\overline{\lambda}} - \left(R(\overline{\lambda})R(\overline{\lambda})^\*\varphi,\varphi\right) \ge 0,$$

which implies (2.7.3) on C⁻. A similar reasoning leads to (2.7.3) on C⁺. □
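The identities (2.7.4) and (2.7.3) can be checked numerically in the simplest finite-dimensional situation. The following sketch is our own toy model, not from the text: it takes H = C and H′ = C, so that Ã is a self-adjoint 2 × 2 matrix and the Štraus family reduces to a scalar function of λ.

```python
import numpy as np

# Toy model: H = C, H' = C, and A_tilde the self-adjoint 2x2 matrix
# [[a, b], [conj(b), c]] in H + H' (names a, b, c are ours).
a, b, c = 1.0, 0.5 + 0.25j, -2.0
A_tilde = np.array([[a, b], [np.conj(b), c]])

lam = 0.7 + 1.3j  # nonreal point, hence lam lies in rho(A_tilde)

# Straus family (2.7.1): eliminating h from f' = a f + b h, lam*h = conj(b) f + c h
# gives the scalar T(lam) = a + |b|^2 / (lam - c).
T = a + abs(b)**2 / (lam - c)

# Compressed resolvent (2.7.2): the (1,1) entry of (A_tilde - lam)^{-1}.
R = np.linalg.inv(A_tilde - lam * np.eye(2))[0, 0]

# Identity (2.7.4): R(lam) = (T(lam) - lam)^{-1}.
assert abs(R - 1.0 / (T - lam)) < 1e-12

# Inequality (2.7.3), which is scalar here: Im R / Im lam - |R|^2 >= 0.
assert R.imag / lam.imag - abs(R)**2 >= 0
print("ok")
```

The first assertion is exactly (2.7.4); the second is (2.7.3) for a one-dimensional space H, where R(λ)R(λ)∗ reduces to |R(λ)|².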

In Chapter 4 it will be shown how the properties of the Štraus family and the compressed resolvent in Lemma 2.7.1 determine the space H′ and the self-adjoint relation Ã in H ⊕ H′.

In the present context the Štraus family and the compressed resolvent appear when one considers self-adjoint extensions of a closed symmetric relation S in a Hilbert space H. Let the Hilbert space H ⊕ H′ be an extension of H, where the Hilbert space H′ is an exit space. Assume that the self-adjoint relation Ã in H ⊕ H′ is an extension of the symmetric relation S in H, i.e., S ⊂ Ã. The Štraus family and the compressed resolvent of Ã consist of relations in the closed subspace H that extend S in the following sense.

**Proposition 2.7.2.** Let S be a closed symmetric relation in H and let Ã be a self-adjoint extension of S in H ⊕ H′. Then the Štraus family T(λ) in (2.7.1) satisfies

$$S \subset T(\lambda) \subset S^\*, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{2.7.5}$$

and the compressed resolvent R(λ) in (2.7.2) satisfies

$$R(\lambda) \upharpoonright\_{\text{ran}\,(S-\lambda)} = (S-\lambda)^{-1}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.\tag{2.7.6}$$

Proof. In order to prove (2.7.5), let {f, f′} ∈ S. Then one has

$$\left\{ \begin{pmatrix} f \\ 0 \end{pmatrix}, \begin{pmatrix} f' \\ 0 \end{pmatrix} \right\} \in \tilde{A},$$

and hence {f, f′} ∈ T(λ) for all λ ∈ C \ R. This shows S ⊂ T(λ) for all λ ∈ C \ R, and making use of Lemma 2.7.1 (ii) one concludes that T(λ) = T(λ̄)∗ ⊂ S∗. The identity (2.7.6) follows from the inclusion S ⊂ T(λ) and (2.7.4). □

Assume in the context of Proposition 2.7.2 that {G, Γ0, Γ1} is a boundary triplet for S∗. Then each relation T(λ), λ ∈ C \ R, in (2.7.5), being an intermediate extension of S, can be described by the relation Γ(T(λ)) in the parameter space G. It follows from Lemma 2.7.1 and Proposition 1.12.6 that the family −T(λ), λ ∈ C \ R, is a Nevanlinna family in H in the sense of Definition 1.12.1. Note that for the holomorphy condition in Definition 1.12.1 it is necessary to apply Proposition 1.12.6. A similar reasoning will also be used in the proof of the next theorem, which relates a Nevanlinna family in G to the Štraus family T(λ).

**Theorem 2.7.3.** Let S be a closed symmetric relation in H and let {G, Γ0, Γ1} be a boundary triplet for S∗. Let Ã be a self-adjoint extension of S in H ⊕ H′ and let T(λ), λ ∈ C \ R, be the corresponding Štraus family in (2.7.1). Then

$$T(\lambda) = \left\{ \widehat{f} \in S^\* \, : \, \Gamma \widehat{f} \in -\tau(\lambda) \right\} = \ker \left( \Gamma\_1 + \tau(\lambda) \Gamma\_0 \right), \tag{2.7.7}$$

where τ(λ), λ ∈ C \ R, is a Nevanlinna family in G.

Proof. It follows from Lemma 2.7.1 and Proposition 2.7.2 that T(λ) is a closed extension of S for each λ ∈ C \ R. According to Theorem 2.1.3, the extension T(λ) of S can be written in the form (2.7.7), where τ(λ) = −Γ(T(λ)).

It will be shown that τ(λ), λ ∈ C \ R, is a Nevanlinna family in G. Since T(λ) is maximal accumulative (maximal dissipative) for λ ∈ C⁺ (λ ∈ C⁻), it follows from Corollary 2.1.4 (ii) that τ(λ) is maximal dissipative (maximal accumulative) for λ ∈ C⁺ (λ ∈ C⁻). The property T(λ)∗ = T(λ̄) and Theorem 2.1.3 imply τ(λ)∗ = τ(λ̄). Denote the γ-field and the Weyl function corresponding to the boundary triplet {G, Γ0, Γ1} by γ and M. Then for λ ∈ C± and μ ∈ C± it follows from Lemma 1.11.5 that (−τ(λ) − M(μ))⁻¹ ∈ **B**(G) and

$$\left(T(\lambda) - \mu\right)^{-1} = \left(A\_0 - \mu\right)^{-1} - \gamma(\mu)\left(\tau(\lambda) + M(\mu)\right)^{-1}\gamma(\overline{\mu})^\* \tag{2.7.8}$$

holds by Theorem 2.6.1. According to Lemma 2.7.1, the mapping λ → (T(λ) − λ)⁻¹ is holomorphic, and hence by Proposition 1.12.6 also the mapping λ → (T(λ) − μ)⁻¹ is holomorphic. Now (2.7.8) shows that λ → (τ(λ) + M(μ))⁻¹ is holomorphic, and another application of Proposition 1.12.6 finally gives that λ → (τ(λ) + μ)⁻¹ is also holomorphic. Therefore, τ(λ), λ ∈ C \ R, is a Nevanlinna family. □

The Kreĭn–Naĭmark formula in the following theorem is now an immediate consequence.

**Theorem 2.7.4.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, A0 = ker Γ0, and let γ and M be the corresponding γ-field and Weyl function, respectively. Let Ã be a self-adjoint extension of S in H ⊕ H′. Then, with the Nevanlinna family τ in G from Theorem 2.7.3, the compressed resolvent R(λ) in (2.7.2) of Ã is given by the Kreĭn–Naĭmark formula

$$R(\lambda) = (A\_0 - \lambda)^{-1} - \gamma(\lambda) \left( M(\lambda) + \tau(\lambda) \right)^{-1} \gamma(\overline{\lambda})^\*, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{2.7.9}$$

Proof. As in the proof of Theorem 2.7.3, it follows from Theorem 2.6.1 that for λ ∈ C \ R one has

$$(T(\lambda) - \lambda)^{-1} = (A\_0 - \lambda)^{-1} - \gamma(\lambda) \left(\tau(\lambda) + M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\*.$$

Hence, the formula (2.7.9) follows from (2.7.4). □

In Chapter 4 the converse of Theorem 2.7.4 will be proved: for every Nevanlinna family in G there exists a self-adjoint exit space extension Ã of S such that (2.7.9) holds for the compressed resolvent of Ã.

Just as in the case of Corollary 2.6.3, there is now a formulation of the Kreĭn formula for exit space extensions in terms of a parametric representation of the Nevanlinna family τ. Assume that the Nevanlinna pair {A, B} is a tight representation of the Nevanlinna family τ; cf. Section 1.12. Then the next corollary can be shown in the same way as Corollary 2.6.3 by applying Proposition 1.12.6.

**Corollary 2.7.5.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, A0 = ker Γ0, and let γ and M be the corresponding γ-field and Weyl function, respectively. Let the Nevanlinna family τ in Theorem 2.7.4 have the tight representation τ = {A, B} with the Nevanlinna pair {A, B}. Then the compressed resolvent R(λ) of Ã has the form

$$R(\lambda) = (A\_0 - \lambda)^{-1} - \gamma(\lambda)\mathcal{A}(\lambda)\left(\mathcal{B}(\lambda) + M(\lambda)\mathcal{A}(\lambda)\right)^{-1}\gamma(\overline{\lambda})^\*, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

The Štraus family in Theorem 2.7.3 can also be described in terms of a representing Nevanlinna pair {A, B}.

**Corollary 2.7.6.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let T(λ), λ ∈ C \ R, be the Štraus family in Theorem 2.7.3. Let the corresponding Nevanlinna family τ have the tight representation τ = {A, B} with the Nevanlinna pair {A, B}. Then

$$T(\lambda) = \left\{ \widehat{f} \in S^\* \, : \, \mathcal{B}(\overline{\lambda})^\* \Gamma\_0 \widehat{f} = -\mathcal{A}(\overline{\lambda})^\* \Gamma\_1 \widehat{f} \right\}. \tag{2.7.10}$$

Proof. By assumption, τ(λ), λ ∈ C \ R, is given as

$$\tau(\lambda) = \left\{ \{ \mathcal{A}(\lambda)\varphi, \mathcal{B}(\lambda)\varphi \} \, : \, \varphi \in \mathcal{G} \right\},$$

with a Nevanlinna pair {A, B}, and this representation is tight. The symmetry property τ(λ)∗ = τ(λ̄) implies that

$$\tau(\lambda)^\* = \{ \{ \mathcal{A}(\overline{\lambda})\varphi, \mathcal{B}(\overline{\lambda})\varphi \} : \varphi \in \mathcal{G} \},$$

so that τ (λ) can also be written as

$$\tau(\lambda) = \left\{ \{\varphi, \varphi'\} \in \mathcal{G}^2 : \mathcal{B}(\overline{\lambda})^\* \varphi = \mathcal{A}(\overline{\lambda})^\* \varphi' \right\},$$

and hence

$$-\tau(\lambda) = \{ \{ \varphi, \varphi' \} \in \mathcal{G}^2 : \mathcal{B}(\overline{\lambda})^\* \varphi = -\mathcal{A}(\overline{\lambda})^\* \varphi' \};$$

cf. (2.2.3) and (2.2.4). Thus, (2.7.10) follows from (2.7.7). □

In the following a particular self-adjoint exit space extension of S will be studied. Here the exit space is the parameter space G.

**Proposition 2.7.7.** Let S be a closed symmetric relation in H and let {G, Γ0, Γ1} be a boundary triplet for S∗. Then

$$\tilde{A} = \left\{ \left\{ \begin{pmatrix} f \\ \Gamma\_0 \hat{f} \end{pmatrix}, \begin{pmatrix} f' \\ -\Gamma\_1 \hat{f} \end{pmatrix} \right\} : \hat{f} = \{f, f'\} \in S^\* \right\} \tag{2.7.11}$$

is a self-adjoint extension of S in H ⊕ G. The corresponding Štraus family T(λ), λ ∈ C \ R, in H has the form

$$T(\lambda) = \left\{ \widehat{f} \in S^\* : -\Gamma\_1 \widehat{f} = \lambda \Gamma\_0 \widehat{f} \right\} = \ker \left( \Gamma\_1 + \lambda \Gamma\_0 \right) \tag{2.7.12}$$

and the compressed resolvent R(λ) onto H is given by

$$R(\lambda) = (A\_0 - \lambda)^{-1} - \gamma(\lambda) \left( M(\lambda) + \lambda \right)^{-1} \gamma(\overline{\lambda})^\*.$$

Proof. Observe that S ⊂ Ã. Indeed, for f̂ = {f, f′} ∈ S one has Γ0f̂ = Γ1f̂ = 0 by Proposition 2.1.2 (ii), and hence

$$\left\{ \begin{pmatrix} f \\ 0 \end{pmatrix}, \begin{pmatrix} f' \\ 0 \end{pmatrix} \right\} \in \tilde{A}.$$

It follows from the abstract Green identity (2.1.1) and the definition of Ã in (2.7.11) that the relation Ã is symmetric, that is, Ã ⊂ (Ã)∗. Now let the element

$$\left\{ \begin{pmatrix} g \\ \alpha \end{pmatrix}, \begin{pmatrix} g' \\ \alpha' \end{pmatrix} \right\}, \qquad g, g' \in \mathfrak{H}, \alpha, \alpha' \in \mathfrak{G}, \tag{2.7.13}$$

belong to (Ã)∗. Then for all f̂ = {f, f′} ∈ S∗ one has

$$\left( \begin{pmatrix} f' \\ -\Gamma\_1 \widehat{f} \end{pmatrix}, \begin{pmatrix} g \\ \alpha \end{pmatrix} \right) = \left( \begin{pmatrix} f \\ \Gamma\_0 \widehat{f} \end{pmatrix}, \begin{pmatrix} g' \\ \alpha' \end{pmatrix} \right)$$

or, equivalently,

$$(f',g) - (f,g') = (\Gamma\_1 \widehat{f}, \alpha) + (\Gamma\_0 \widehat{f}, \alpha'). \tag{2.7.14}$$

In particular, since ker Γ = S, it follows from (2.7.14) that if f̂ ∈ S, then ĝ = {g, g′} ∈ S∗. Therefore, the abstract Green identity (2.1.1) together with (2.7.14) imply

$$(\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}) - (\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g}) = (\Gamma\_1 \widehat{f}, \alpha) + (\Gamma\_0 \widehat{f}, \alpha')$$

for all f̂ ∈ S∗. By definition, the mapping Γ : S∗ → G × G is surjective, and consequently

$$
\alpha = \Gamma\_0 \widehat{g} \quad \text{and} \quad \alpha' = -\Gamma\_1 \widehat{g}.
$$

Thus, the element in (2.7.13) belongs to Ã. Hence, Ã is a self-adjoint extension of S in H ⊕ G.

The Štraus family T(λ), λ ∈ C, in H corresponding to the self-adjoint relation Ã in (2.7.11) has the form

$$T(\lambda) = \left\{ \widehat{f} = \{f, f'\} \in S^\* : \left\{ \begin{pmatrix} f \\ \Gamma\_0 \widehat{f} \end{pmatrix}, \begin{pmatrix} f' \\ -\Gamma\_1 \widehat{f} \end{pmatrix} \right\} \in \tilde{A}, \ -\Gamma\_1 \widehat{f} = \lambda \Gamma\_0 \widehat{f} \right\},$$

and hence is given by (2.7.12). The statement concerning the compressed resolvent of Ã onto H follows from the Kreĭn–Naĭmark formula in Theorem 2.7.4. □
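As an orientation, consider the simplest situation dim G = 1 (a scalar sketch of our own, not taken from the text): then M(λ) is a scalar Nevanlinna function, γ(λ) is a vector in H, and the compressed resolvent of Proposition 2.7.7 acts as a rank-one perturbation of the resolvent of A0:

$$R(\lambda)\varphi = (A\_0 - \lambda)^{-1}\varphi - \frac{\bigl(\varphi, \gamma(\overline{\lambda})\bigr)\_{\mathfrak{H}}}{M(\lambda) + \lambda}\,\gamma(\lambda), \qquad \varphi \in \mathfrak{H}, \ \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Here γ(λ̄)∗φ = (φ, γ(λ̄)) is used; the formula exhibits R(λ) − (A0 − λ)⁻¹ as a rank-one operator, in line with the one-dimensional exit space G.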

Finally, the Štraus family and the compressed resolvent of the self-adjoint relation Ã in H ⊕ H′ can be regarded from a slightly different point of view. Thus far, the Štraus family and the compressed resolvent were given as notions in the Hilbert space H; more structure was added by considering a closed symmetric relation S in H and assuming that Ã is a self-adjoint extension of S in H ⊕ H′. Now the role of the original space and the exit space will be interchanged and a self-adjoint relation Ã in H ⊕ H′ will be viewed as a self-adjoint extension of the trivial symmetric relation S′ in H′. The Štraus family T′(λ), λ ∈ C, in H′ corresponding to Ã in H ⊕ H′ is defined by

$$T'(\lambda) = \left\{ \{h, h'\} \in \mathfrak{H}' \times \mathfrak{H}' : \left\{ \begin{pmatrix} f \\ h \end{pmatrix}, \begin{pmatrix} f' \\ h' \end{pmatrix} \right\} \in \tilde{A}, f' = \lambda f \right\}$$

and the corresponding compressed resolvent R′(λ) ∈ **B**(H′) is given by

$$P\_{\mathfrak{H}'}(\widetilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}'} = \left(T'(\lambda) - \lambda\right)^{-1}, \quad \lambda \in \rho(\widetilde{A});\tag{2.7.15}$$

here P_H′ : H ⊕ H′ → H′ denotes the orthogonal projection from H ⊕ H′ onto H′ and ι_H′ : H′ → H ⊕ H′ is the canonical embedding of H′ into H ⊕ H′. The adjoint of the trivial symmetric relation S′ = {0, 0} in H′ is (S′)∗ = H′ × H′ and

$$
\Gamma\_0' \widehat{h} = h \quad \text{and} \quad \Gamma\_1' \widehat{h} = h', \quad \widehat{h} = \{h, h'\} \in (S')^\*,
$$

defines a boundary triplet {H′, Γ′0, Γ′1} for (S′)∗. Then A′0 = {0} × H′, so that (A′0 − λ)⁻¹ = 0, λ ∈ C, and the γ-field and the Weyl function are given by

$$
\gamma'(\lambda) = I \quad \text{and} \quad M'(\lambda) = \lambda I;
$$

cf. Example 2.4.2. In this situation the Štraus family T′(λ), λ ∈ C \ R, in H′ induces a Nevanlinna family τ(λ), λ ∈ C \ R, in the same space H′ as in (2.7.7) via

$$T'(\lambda) = \ker\left(\Gamma\_1' + \tau(\lambda)\Gamma\_0'\right),$$

so that τ(λ) = −T′(λ). Then the compressed resolvent (2.7.15) takes the form

$$P\_{\mathfrak{H}'} (\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}'} = - (\tau(\lambda) + \lambda)^{-1},\tag{2.7.16}$$

which can be viewed as the Kreĭn–Naĭmark formula in H′ for the extension Ã of S′.

For the self-adjoint relation Ã in Proposition 2.7.7 it turns out in this new context that the corresponding Štraus family in G is given by the function −M.

**Proposition 2.7.8.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, and let M be the corresponding Weyl function. Consider the self-adjoint relation

$$\tilde{A} = \left\{ \left\{ \begin{pmatrix} f \\ \Gamma\_0 \widehat{f} \end{pmatrix}, \begin{pmatrix} f' \\ -\Gamma\_1 \widehat{f} \end{pmatrix} \right\} : \widehat{f} = \{f, f'\} \in S^\* \right\}$$

in H ⊕ G. Then the corresponding Štraus family in G is given by

$$\left\{ \left\{ \Gamma\_0 \widehat{f}, -\Gamma\_1 \widehat{f} \right\} \in \mathcal{G} \times \mathcal{G} : \left\{ \begin{pmatrix} f \\ \Gamma\_0 \widehat{f} \end{pmatrix}, \begin{pmatrix} f' \\ -\Gamma\_1 \widehat{f} \end{pmatrix} \right\} \in \widetilde{A}, \widehat{f} \in \widehat{\mathfrak{N}}\_{\lambda}(S^\*) \right\} \quad (2.7.17)$$

and coincides with −M(λ), λ ∈ C \ R. Furthermore, the compressed resolvent of Ã onto G is given by

$$P\_{\mathcal{G}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathcal{G}} = -(M(\lambda) + \lambda)^{-1}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R};\tag{2.7.18}$$

here P_G : H ⊕ G → G denotes the orthogonal projection from H ⊕ G onto G and ι_G : G → H ⊕ G is the canonical embedding of G into H ⊕ G.

Proof. It follows from the definition of the Štraus family in (2.7.1) that the Štraus family corresponding to Ã in the Hilbert space G has the form (2.7.17). Since {Γ0f̂, −Γ1f̂} belongs to (2.7.17) if and only if f̂ ∈ N̂λ(S∗), it is also clear that for all λ ∈ C \ R the Štraus family coincides with the values −M(λ) of the Weyl function corresponding to the boundary triplet {G, Γ0, Γ1}. The formula (2.7.18) follows from (2.7.16) in this special case. □

## **2.8 Perturbation problems**

Let A be a self-adjoint relation in the Hilbert space H, let V ∈ **B**(H) be a bounded self-adjoint operator in H, and consider the self-adjoint relation

$$B = A + V.\tag{2.8.1}$$

For λ ∈ ρ(A) ∩ ρ(B) one can rewrite (2.8.1) in the form

$$(B-\lambda)^{-1} - (A-\lambda)^{-1} = -(B-\lambda)^{-1}V(A-\lambda)^{-1};$$

this follows from Lemma 1.11.2 with H = A, R = λ − V and S = λ. In particular, if V in (2.8.1) belongs to some left-sided or right-sided operator ideal, then the same is true for the difference of the resolvents of A and B. From this point of view perturbation problems in the resolvent sense are more general than additive perturbations of the form (2.8.1). Such perturbation problems embed naturally in the framework of the extension theory that has been discussed in this chapter.
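For bounded operators the resolvent identity above is elementary matrix algebra; the following sketch (with our own randomly chosen matrices, not data from the text) verifies it numerically:

```python
import numpy as np

# Finite-dimensional stand-ins for A and V (our own choice of matrices).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = X + X.conj().T                      # self-adjoint A
Y = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
V = Y + Y.conj().T                      # bounded self-adjoint perturbation
B = A + V                               # additive perturbation (2.8.1)

lam = 0.3 + 2.0j                        # nonreal, so lam in rho(A) and rho(B)
RA = np.linalg.inv(A - lam * np.eye(4))
RB = np.linalg.inv(B - lam * np.eye(4))

# (B - lam)^{-1} - (A - lam)^{-1} = -(B - lam)^{-1} V (A - lam)^{-1}
assert np.allclose(RB - RA, -RB @ V @ RA)
print("ok")
```

In particular, if V has rank n, then so does the resolvent difference, which is the situation of Theorem 2.8.1 below.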

In the next theorem the particularly simple case of finite-rank perturbations is treated.

**Theorem 2.8.1.** Let A and B be self-adjoint relations in H and assume that

$$\dim \text{ran} \left( \left( B - \lambda\_0 \right)^{-1} - \left( A - \lambda\_0 \right)^{-1} \right) = n < \infty \tag{2.8.2}$$

for some, and hence for all, λ0 ∈ ρ(A) ∩ ρ(B). Then S = A ∩ B is a closed symmetric relation in H and there exists a boundary triplet {Cⁿ, Γ0, Γ1} for S∗ such that

$$A = \ker \Gamma\_0 \qquad \text{and} \qquad B = \ker \Gamma\_1. \tag{2.8.3}$$

If γ and M are the γ-field and the Weyl function, respectively, corresponding to {Cⁿ, Γ0, Γ1}, then

$$(B - \lambda)^{-1} - (A - \lambda)^{-1} = -\gamma(\lambda)M(\lambda)^{-1}\gamma(\overline{\lambda})^\* \tag{2.8.4}$$

for all λ ∈ ρ(A) ∩ ρ(B). Moreover, if λ ∈ ρ(A), then λ ∈ σp(B) if and only if 0 ∈ σp(M(λ)), and the multiplicities are at most n and coincide.

Proof. Let λ0 ∈ ρ(A) ∩ ρ(B) be such that (2.8.2) holds and consider the closed symmetric relation S = A ∩ B in H. By construction, A and B are disjoint self-adjoint extensions of S, and hence

$$\text{ran}\,(S - \overline{\lambda}\_0) = \ker\left( (B - \overline{\lambda}\_0)^{-1} - (A - \overline{\lambda}\_0)^{-1} \right)$$

by Theorem 1.7.8. This leads to

$$\ker\left(S^\*-\lambda\_0\right) = \left(\text{ran}\left(S-\overline{\lambda}\_0\right)\right)^\perp = \text{ran}\left(\left(B-\lambda\_0\right)^{-1}-\left(A-\lambda\_0\right)^{-1}\right),$$

where (2.8.2) was used in the last equality. Now Theorem 1.7.8 implies that A and B are transversal self-adjoint extensions of S. Theorem 2.5.9 shows that there exists a boundary triplet {Cⁿ, Γ0, Γ1} such that (2.8.3) holds, and the formula (2.8.4) follows from Theorem 2.6.1. One also concludes from (2.8.4) and the fact that M(λ) is bijective for λ ∈ ρ(A) ∩ ρ(B) (see Corollary 2.5.4) that the difference of the resolvents in (2.8.2) is of rank n for all λ ∈ ρ(A) ∩ ρ(B). The last statement on the eigenvalues of B follows from Theorem 2.6.2 (i). □
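A minimal finite-dimensional illustration of (2.8.2) (our own example, with A a diagonal matrix and B a rank-one perturbation of A) shows that the rank of the resolvent difference is indeed independent of λ:

```python
import numpy as np

# A diagonal self-adjoint, B = A + V with V of rank 1 (example data is ours),
# so the resolvent difference has rank 1 at every lam in rho(A) ∩ rho(B).
A = np.diag([0.0, 1.0, 2.0, 3.0])
v = np.array([[1.0], [1.0], [0.0], [2.0]])
V = v @ v.T                             # rank-one self-adjoint perturbation
B = A + V

for lam in (1j, 0.5 + 2j, -1.0 + 0.1j):   # nonreal points, in both resolvent sets
    RA = np.linalg.inv(A - lam * np.eye(4))
    RB = np.linalg.inv(B - lam * np.eye(4))
    D = RB - RA
    # rank of the resolvent difference = rank of V = 1, for every such lam
    assert np.linalg.matrix_rank(D, tol=1e-10) == 1
print("ok")
```

This is just the second resolvent identity again: D = −RB V RA has the same rank as V, since RA and RB are invertible.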

The following result is a generalization of Theorem 2.8.1 that applies to non-self-adjoint intermediate extensions B.

**Theorem 2.8.2.** Let A be a self-adjoint relation in H and let B be a closed relation in H such that ρ(B) ≠ ∅. Then S = A ∩ B is a closed symmetric relation in H and there exist a boundary triplet {G, Γ0, Γ1} for S∗ and a closed operator Θ in G such that

$$A = \ker \Gamma\_0 \qquad \text{and} \qquad B = \ker \left(\Gamma\_1 - \Theta \Gamma\_0\right).$$

If γ and M are the γ-field and the Weyl function, respectively, corresponding to {G, Γ0, Γ1}, then

$$(B - \lambda)^{-1} - (A - \lambda)^{-1} = \gamma(\lambda) \left(\Theta - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\*$$

for all λ ∈ ρ(A) ∩ ρ(B). Moreover, for all λ ∈ ρ(A) one has λ ∈ σi(B) if and only if 0 ∈ σi(Θ − M(λ)), i = p, c, r, and for i = p the geometric multiplicities coincide.

Proof. It is clear that S = A ∩ B is a closed symmetric relation and hence there exists a boundary triplet {G, Γ0, Γ1} for S∗ such that A = ker Γ0; cf. Theorem 2.4.1. Since B is a closed extension of S, there exists a closed relation Θ in G such that B = ker (Γ1 − ΘΓ0). By construction, the relations A and B are disjoint, and hence it follows from Proposition 2.1.8 (i) that Θ is a closed operator in G. The resolvent formula and the assertion on the spectrum of B are immediate consequences of Theorem 2.6.1 and Theorem 2.6.2. □


Let K and L be Hilbert spaces and let T ∈ **B**(K, L) be a compact operator. Recall that the singular values sk(T), k ∈ N, of T are defined as the eigenvalues of the nonnegative compact operator (T∗T)^{1/2} ∈ **B**(K) (enumerated in nonincreasing order). The Schatten–von Neumann ideal Sp(K, L), 1 ≤ p < ∞, consists of all compact operators T ∈ **B**(K, L) whose singular values are p-summable, that is,

$$\sum\_{k=1}^{\infty} (s\_k(T))^p < \infty.$$

If K = L, the notation Sp(K) is used instead of Sp(K, K). Observe that the nonzero singular values of T coincide with the nonzero singular values of the restriction of T to (ker T)⊥, as the corresponding restriction of T∗T is a nonnegative compact operator in the Hilbert space (ker T)⊥. This fact will be used in the proof of the following theorem.
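In finite dimensions the singular values are obtained from the singular value decomposition; the following sketch (our own example data, not from the text) illustrates the definitions above:

```python
import numpy as np

# Singular values s_k(T), i.e. the eigenvalues of (T*T)^{1/2}, for a simple
# rank-one matrix T with column (3, 4, 0), so s(T) = (5, 0, 0).
T = np.array([[3.0, 0.0, 0.0],
              [4.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

s = np.linalg.svd(T, compute_uv=False)   # singular values, nonincreasing
assert np.allclose(s, [5.0, 0.0, 0.0])

# The eigenvalues of T*T are the squares of the singular values:
eigs = np.linalg.eigvalsh(T.T @ T)
assert np.isclose(max(eigs), 25.0)

# p-summability (trivially satisfied in finite dimensions), e.g. for p = 2:
assert sum(sk**2 for sk in s) < np.inf
print("ok")
```

The nonzero singular value 5 also equals the only nonzero singular value of T restricted to (ker T)⊥, which here is the span of the first coordinate vector.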

**Theorem 2.8.3.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let AΘ1 and AΘ2 be closed extensions of S corresponding to closed relations Θ1 and Θ2 in G via (2.1.5), and assume that ρ(AΘ1) ∩ ρ(AΘ2) ≠ ∅ and ρ(Θ1) ∩ ρ(Θ2) ≠ ∅. Then

$$\left(A\_{\Theta\_1} - \lambda\right)^{-1} - \left(A\_{\Theta\_2} - \lambda\right)^{-1} \in \mathfrak{S}\_p(\mathfrak{H}) \tag{2.8.5}$$

for some, and hence for all λ ∈ ρ(AΘ<sup>1</sup> ) ∩ ρ(AΘ<sup>2</sup> ) if and only if

$$(\Theta\_1 - \xi)^{-1} - (\Theta\_2 - \xi)^{-1} \in \mathfrak{S}\_p(\mathcal{G}) \tag{2.8.6}$$

for some, and hence for all ξ ∈ ρ(Θ1) ∩ ρ(Θ2).

Proof. Let A<sup>0</sup> = ker Γ<sup>0</sup> and let γ and M be the γ-field and the Weyl function corresponding to the boundary triplet {G, Γ0, Γ1}. Then one has

$$\begin{aligned} \left(A\_{\Theta\_1} - \lambda\right)^{-1} &= \left(A\_0 - \lambda\right)^{-1} + \gamma(\lambda)\left(\Theta\_1 - M(\lambda)\right)^{-1}\gamma(\overline{\lambda})^\*,\\ \left(A\_{\Theta\_2} - \lambda\right)^{-1} &= \left(A\_0 - \lambda\right)^{-1} + \gamma(\lambda)\left(\Theta\_2 - M(\lambda)\right)^{-1}\gamma(\overline{\lambda})^\*,\end{aligned}$$

for all λ ∈ ρ(AΘ<sup>1</sup> ) ∩ ρ(AΘ<sup>2</sup> ) ∩ ρ(A0), and hence

$$\begin{split} (A\_{\Theta\_1} - \lambda)^{-1} - (A\_{\Theta\_2} - \lambda)^{-1} \\ = \gamma(\lambda) \left[ \left( \Theta\_1 - M(\lambda) \right)^{-1} - \left( \Theta\_2 - M(\lambda) \right)^{-1} \right] \gamma(\overline{\lambda})^\*. \end{split} \tag{2.8.7}$$

It will be shown that (2.8.5) holds if and only if

$$\left(\Theta\_1 - M(\lambda)\right)^{-1} - \left(\Theta\_2 - M(\lambda)\right)^{-1} \in \mathfrak{S}\_p(\mathcal{G}).\tag{2.8.8}$$

In fact, it is clear that if (2.8.8) holds, then so does (2.8.5). Conversely, if (2.8.5) holds, then

$$\gamma(\lambda)\left[\left(\Theta\_1 - M(\lambda)\right)^{-1} - \left(\Theta\_2 - M(\lambda)\right)^{-1}\right] \gamma(\overline{\lambda})^\* \in \mathfrak{S}\_p(\mathfrak{H}) \tag{2.8.9}$$

follows directly from (2.8.7). Since γ(λ) is an isomorphism from G onto Nλ(S∗) and ker γ(λ̄)∗ = Nλ̄(S∗)⊥, the restriction of γ(λ̄)∗ to Nλ̄(S∗) is an isomorphism onto G. Hence, the operator in (2.8.9) may also be viewed as a bounded operator from Nλ̄(S∗) to Nλ(S∗) and thus belongs to the Schatten–von Neumann ideal Sp(Nλ̄(S∗), Nλ(S∗)). In this context γ(λ) : G → Nλ(S∗) and γ(λ̄)∗ : Nλ̄(S∗) → G are boundedly invertible, and hence it follows that (2.8.8) holds. Therefore, if λ ∈ ρ(AΘ1) ∩ ρ(AΘ2) ∩ ρ(A0), then (2.8.5) is equivalent to (2.8.8). Note that if (2.8.5) holds for some λ ∈ ρ(AΘ1) ∩ ρ(AΘ2), then it holds for all λ ∈ ρ(AΘ1) ∩ ρ(AΘ2) by Lemma 1.11.4.

It remains to show that (2.8.8) is equivalent to (2.8.6) for all λ ∈ ρ(AΘ1) ∩ ρ(AΘ2) ∩ ρ(A0). By Lemma 1.11.4,

$$\begin{aligned} \left(\Theta\_1 - M(\lambda)\right)^{-1} - \left(\Theta\_2 - M(\lambda)\right)^{-1} \\ &= \left[I - \left(\Theta\_1 - \xi\right)^{-1} (M(\lambda) - \xi)\right]^{-1} \\ &\quad \left[\left(\Theta\_1 - \xi\right)^{-1} - \left(\Theta\_2 - \xi\right)^{-1}\right] \left[I - \left(M(\lambda) - \xi\right)\left(\Theta\_2 - \xi\right)^{-1}\right]^{-1} \end{aligned}$$

and since the factors around (Θ1 − ξ)⁻¹ − (Θ2 − ξ)⁻¹ on the right-hand side are boundedly invertible by Lemma 1.11.3, this establishes the equivalence of (2.8.8) and (2.8.6). □

If Θ1 and Θ2 in Theorem 2.8.3 are bounded operators in G, the condition ρ(Θ1) ∩ ρ(Θ2) ≠ ∅ is automatically satisfied, and the identity

$$(\Theta\_1 - \xi)^{-1} - (\Theta\_2 - \xi)^{-1} = (\Theta\_1 - \xi)^{-1} (\Theta\_2 - \Theta\_1)(\Theta\_2 - \xi)^{-1}$$

shows that (Θ1 − ξ)⁻¹ − (Θ2 − ξ)⁻¹ ∈ Sp(G) for ξ ∈ ρ(Θ1) ∩ ρ(Θ2) if and only if Θ1 − Θ2 ∈ Sp(G). This leads to the following corollary.

**Corollary 2.8.4.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let AΘ1 and AΘ2 be closed extensions of S which correspond to bounded operators Θ1, Θ2 ∈ **B**(G) via (2.1.5), and assume that ρ(AΘ1) ∩ ρ(AΘ2) ≠ ∅. Then

$$(A\_{\Theta\_1} - \lambda)^{-1} - (A\_{\Theta\_2} - \lambda)^{-1} \in \mathfrak{S}\_p(\mathfrak{H})$$

for some, and hence for all λ ∈ ρ(AΘ<sup>1</sup> ) ∩ ρ(AΘ<sup>2</sup> ) if and only if

$$
\Theta\_1 - \Theta\_2 \in \mathfrak{S}\_p(\mathcal{G}).
$$

The following proposition is an addendum to Theorem 2.8.2 in the special case where B is an Sp-perturbation of A in the resolvent sense.

**Proposition 2.8.5.** Let A be a self-adjoint relation in H, let B be a closed relation in H with ρ(B) ≠ ∅, assume that

$$(B - \lambda\_0)^{-1} - (A - \lambda\_0)^{-1} \in \mathfrak{S}\_p(\mathfrak{H})\tag{2.8.10}$$

for some λ0 ∈ ρ(A) ∩ ρ(B), and that the operator in (2.8.10) is not of finite rank. Let S = A ∩ B and let {G, Γ0, Γ1} be a boundary triplet for S∗ as in Theorem 2.8.2 such that

$$A = \ker \Gamma\_0 \qquad and \qquad B = \ker \left(\Gamma\_1 - \Theta \Gamma\_0\right)$$

for some closed operator Θ in G. If ρ(Θ) ≠ ∅, then Θ is an unbounded closed operator and (Θ − ξ)⁻¹ ∈ Sp(G) for all ξ ∈ ρ(Θ).

Proof. Assume that (2.8.10) holds and that ρ(Θ) ≠ ∅. As Θ0 = {0} × G is the self-adjoint relation in G which corresponds to A = ker Γ0, and (Θ0 − ξ)⁻¹ = 0 for ξ ∈ C, one concludes from Theorem 2.8.3 and (2.8.10) that

$$(\Theta - \xi)^{-1} = (\Theta - \xi)^{-1} - (\Theta\_0 - \xi)^{-1} \in \mathfrak{S}\_p(\mathcal{G}), \quad \xi \in \rho(\Theta).$$

Together with the assumption that the operator in (2.8.10) is of infinite rank, this implies that Θ is an unbounded closed operator in G. □

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 3**

## **Spectra, Simple Operators, and Weyl Functions**

In this chapter the spectrum of a self-adjoint operator or relation will be completely characterized in terms of the analytic behavior and the limit properties of the Weyl function. In order to be able to treat the different parts of the spectrum, a short introduction to finite Borel measures on R and the corresponding Borel transforms is given in Section 3.1 and Section 3.2. The notions and some properties of the absolutely continuous, singular continuous, pure point, and other spectral subsets of a self-adjoint relation are recalled in Section 3.3. Moreover, the concepts of simplicity (or complete non-self-adjointness) and local simplicity of symmetric operators and relations are explained in detail in Section 3.4. For a boundary triplet {G, Γ0, Γ1} with corresponding Weyl function M, the spectrum of the self-adjoint extension A0 = ker Γ0 is then characterized. An analytic description of the point spectrum of A0 in terms of M is given in Section 3.5; the rest of the spectrum and its different parts, namely the absolutely continuous, singular, and continuous spectrum, are studied in Section 3.6 under the additional condition that the underlying symmetric relation S is simple or locally simple. The limit properties of the Weyl function are also connected with defect elements belonging to the domain or range of A0; this is discussed in Section 3.7. Finally, it is shown in Section 3.8, with the help of transformation properties of boundary triplets and Weyl functions, how the earlier results in this chapter extend to a description of the spectrum of an arbitrary self-adjoint extension AΘ.

## **3.1 Analytic descriptions of minimal supports of Borel measures**

A Borel measure on R can be decomposed with respect to the Lebesgue measure into an absolutely continuous measure and a singular measure. The minimal supports of the measure and its parts can be described by means of the derivative of the measure. The present interest is in an analytic description of these minimal supports in terms of the Borel transform. For the convenience of the reader, a brief review of Borel measures on R and some properties of their Borel transforms are recalled.

In the following let μ be a regular Borel measure on R and denote the Lebesgue measure on R by m. Recall that any Borel measure on R which is finite on compact sets is automatically regular. Associated with the regular Borel measure μ is the nondecreasing, left-continuous function

$$\nu\_{\mu}(x) = \begin{cases} \mu([0, x)), & x > 0, \\ 0, & x = 0, \\ -\mu([x, 0)), & x < 0, \end{cases} \tag{3.1.1}$$

on R. Observe that νμ is bounded if and only if μ is a finite measure, that the derivative ν′μ of the nondecreasing function νμ exists m-almost everywhere, and that

$$
\mu([x, y)) = \nu\_{\mu}(y) - \nu\_{\mu}(x), \qquad x < y. \tag{3.1.2}
$$

It is important to note that via (3.1.2) the function ν<sup>μ</sup> induces a Lebesgue-Stieltjes measure on R, which is a complete measure that coincides with the completion of μ. In the following it is often more convenient to work with this completion, which will also be denoted by μ, and the corresponding μ-measurable subsets of R.
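The conventions in (3.1.1) and (3.1.2) can be illustrated numerically. The following sketch (an illustration, not part of the text; the atomic measure and its weights are arbitrary choices) builds νμ for a purely atomic measure and checks that νμ(y) − νμ(x) recovers μ([x, y)):

```python
# Numerical sketch (illustrative): the distribution function (3.1.1) for a
# purely atomic measure mu = sum_k w_k * delta_{t_k}, and a check of the
# identity (3.1.2): mu([x, y)) = nu(y) - nu(x).
atoms = [(-1.5, 0.25), (0.0, 1.0), (0.75, 0.5)]   # (position t_k, weight w_k)

def mu_interval(x, y):
    """mu([x, y)) for the atomic measure above (x < y)."""
    return sum(w for t, w in atoms if x <= t < y)

def nu(x):
    """The nondecreasing, left-continuous function nu_mu of (3.1.1)."""
    if x > 0:
        return mu_interval(0.0, x)      # mu([0, x))
    if x == 0:
        return 0.0
    return -mu_interval(x, 0.0)         # -mu([x, 0))

# (3.1.2): mu([x, y)) = nu(y) - nu(x) for x < y, regardless of the sign of x, y
for x, y in [(-2.0, -1.0), (-1.0, 0.5), (0.5, 1.0), (-2.0, 1.0)]:
    assert abs(mu_interval(x, y) - (nu(y) - nu(x))) < 1e-12
```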

The regular Borel measure μ has a Lebesgue decomposition with respect to the Lebesgue measure m:

$$\mu = \mu\_{\rm ac} + \mu\_{\rm s},$$

where the measure μac is absolutely continuous and the measure μ<sup>s</sup> is singular, each with respect to the Lebesgue measure. The singular measure μ<sup>s</sup> is further decomposed into the singular continuous part μsc and the pure point part μp, so that

$$
\mu = \mu\_{\rm ac} + \mu\_{\rm sc} + \mu\_{\rm p}.
$$

The corresponding nondecreasing, left-continuous functions νμac, νμsc, and νμp, defined via (3.1.1), are absolutely continuous, continuous with ν′μsc = 0 m-almost everywhere, and a step function, respectively, and

$$
\nu\_{\mu} = \nu\_{\mu\_{\rm ac}} + \nu\_{\mu\_{\rm sc}} + \nu\_{\mu\_{\rm p}}.
$$

Furthermore,

$$
\mu\_{\rm ac}(\mathfrak{B}) = \int\_{\mathfrak{B}} \nu\_{\mu}'(x) \, dm(x) \tag{3.1.3}
$$

for all Borel sets B, and hence the derivative ν′μ coincides with the Radon–Nikodým derivative of μac m-almost everywhere.

For x ∈ R the derivative μ′(x) of the Borel measure μ with respect to the Lebesgue measure m is defined by

$$\mu'(x) = \lim\_{m(I\_x)\downarrow 0} \left\{ \frac{\mu(I\_x)}{m(I\_x)} : I\_x \text{ an interval containing } x \right\},\tag{3.1.4}$$

whenever the limit exists and takes values in [0, ∞]. It can be shown that the sets

$$\mathfrak{E}\_0 = \left\{ x \in \mathbb{R} \, : \, \mu'(x) \text{ exists finitely} \right\} \tag{3.1.5}$$

and

$$\mathfrak{E} = \left\{ x \in \mathbb{R} \, : \, \mu'(x) \text{ exists finitely or infinitely} \right\} \tag{3.1.6}$$

are Borel sets, and for the set R \ E0 on which the derivative μ′ does not exist finitely one has that

$$m(\mathbb{R} \backslash \mathfrak{E}\_0) = 0,\tag{3.1.7}$$

while for the set R \ E on which the derivative μ′ does not exist finitely or infinitely one has that

$$m(\mathbb{R} \backslash \mathfrak{E}) = 0 \quad \text{and} \quad \mu(\mathbb{R} \backslash \mathfrak{E}) = 0; \tag{3.1.8}$$

note that R \ E ⊂ R \ E0. Recall also that the derivative ν′μ of the function νμ in (3.1.2) and the derivative μ′ in (3.1.4) of the measure μ coincide m-almost everywhere.
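As a numerical illustration (not part of the text; the Gaussian density is an arbitrary choice), consider the absolutely continuous measure dμ(t) = e^(−t²) dm(t). The derivative (3.1.4) then exists everywhere and equals the density, and the quotient μ(Ix)/m(Ix) over a small symmetric interval approximates it:

```python
# Numerical sketch (illustrative): for d(mu) = exp(-t^2) dm(t), the
# derivative (3.1.4) exists everywhere and mu'(x) = exp(-x^2).
import math

def mu_interval(a, b):
    # mu((a, b)) = integral of exp(-t^2) over (a, b), via the error function
    return math.sqrt(math.pi) / 2.0 * (math.erf(b) - math.erf(a))

def mu_prime(x, eps=1e-6):
    # difference quotient mu(I_x)/m(I_x) over a small interval containing x
    return mu_interval(x - eps, x + eps) / (2 * eps)

for x in (0.0, 0.5, -1.0):
    assert abs(mu_prime(x) - math.exp(-x * x)) < 1e-6
```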

A μ-measurable set S ⊂ R is called a support of μ if μ(R \ S) = 0. In particular, this implies that μ(A) = μ(A ∩ S) for all μ-measurable sets A ⊂ R. A support S ⊂ R of μ is called minimal if for all subsets S0 ⊂ S that are μ-measurable and m-measurable, μ(S0) = 0 implies m(S0) = 0. A minimal support is not uniquely determined. The next auxiliary lemma provides some useful properties of minimal supports.

**Lemma 3.1.1.** Let μ be a Borel measure on R and let S, S′ ⊂ R be sets that are measurable with respect to μ and m. Then the following statements hold:

(i) if S and S′ are minimal supports for μ, then m(SΔS′) = 0;

(ii) if S is a minimal support for μ, μ(S \ S′) = 0, and m(S′ \ S) = 0, then S′ is a minimal support for μ.

Proof. (i) Since SΔS′ ⊂ (R \ S) ∪ (R \ S′) and both S and S′ are supports for μ, one has

$$
\mu(\mathfrak{S}\Delta\mathfrak{S}') \le \mu(\mathbb{R} \backslash \mathfrak{S}) + \mu(\mathbb{R} \backslash \mathfrak{S}') = 0.
$$

In particular, μ(S \ S′) = 0. Now S \ S′ ⊂ S is μ-measurable and m-measurable, and since S is a minimal support, it follows that m(S \ S′) = 0. A similar argument shows that m(S′ \ S) = 0. Hence, m(SΔS′) = 0.

(ii) From R \ S′ = ((R \ S) ∪ (S \ S′)) \ (S′ \ S) one concludes that

$$
\mu(\mathbb{R} \backslash \mathfrak{S}') \le \mu(\mathbb{R} \backslash \mathfrak{S}) + \mu(\mathfrak{S} \backslash \mathfrak{S}').
$$

Since S is a support of μ and it is assumed that μ(S \ S′) = 0, it follows that μ(R \ S′) = 0. Hence, S′ is a support of μ.

To prove that S′ is a minimal support for μ, let S0 ⊂ S′ be μ-measurable and m-measurable, and assume that m(S0) > 0. Since

$$\mathfrak{S}\_0 = (\mathfrak{S}\_0 \cap \mathfrak{S}) \cup \{\mathfrak{S}\_0 \cap (\mathfrak{S}' \backslash \mathfrak{S})\}\tag{3.1.9}$$

and m(S′ \ S) = 0 by assumption, it follows that m(S0 ∩ S) = m(S0) > 0. As S is a minimal support for μ, this implies μ(S0 ∩ S) > 0. Therefore, (3.1.9) leads to

$$
\mu(\mathfrak{S}\_0) = \mu(\mathfrak{S}\_0 \cap \mathfrak{S}) + \mu\{\mathfrak{S}\_0 \cap (\mathfrak{S}' \backslash \mathfrak{S})\} \ge \mu(\mathfrak{S}\_0 \cap \mathfrak{S}) > 0.
$$

Thus, S′ is a minimal support for μ. □

Minimal supports for the parts of the measure in its Lebesgue decomposition can be expressed in terms of the behavior of the derivative μ′; cf. [335, Lemma 4] (see also [676, 682]).

**Theorem 3.1.2.** Let μ be a regular Borel measure on R. Then the following sets

(i) {x ∈ E : 0 < μ′(x) ≤ ∞};

(ii) {x ∈ E0 : 0 < μ′(x) < ∞};

(iii) {x ∈ E : μ′(x) = ∞};

(iv) {x ∈ E : μ′(x) = ∞, μ({x}) = 0};

(v) {x ∈ E : μ′(x) = ∞, μ({x}) > 0},

are minimal supports for μ, μac, μs, μsc, and μp, respectively.

For practical reasons the attention is now restricted to finite Borel measures on R. The properties of such measures are reflected by the boundary behavior of their so-called Borel transform in a sense to be made precise; cf. Appendix A.

**Definition 3.1.3.** Let μ be a finite Borel measure on R. Then the Borel transform F of μ is the function F defined by

$$F(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\mu(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{3.1.10}$$

If for some x ∈ R the limit lim_{y↓0} F(x + iy) exists and takes values in [0, ∞], it will be denoted by F(x + i0). The set of points in R where the limit of the imaginary part of F exists and takes values in [0, ∞] is denoted by

$$\mathfrak{F} = \left\{ x \in \mathbb{R} : \operatorname{Im} F(x + i0) \text{ exists finitely or infinitely} \right\}. \tag{3.1.11}$$


It follows from the integral representation (3.1.10) that

$$\begin{aligned} y \operatorname{Re} F(x+iy) &= \int\_{\mathbb{R}} \frac{(t-x)y}{(t-x)^2 + y^2} \, d\mu(t), \\ y \operatorname{Im} F(x+iy) &= \int\_{\mathbb{R}} \frac{y^2}{(t-x)^2 + y^2} \, d\mu(t), \end{aligned}$$

and hence, by dominated convergence,

$$\lim\_{y \downarrow 0} y \operatorname{Re} F(x+iy) = 0 \quad \text{and} \quad \lim\_{y \downarrow 0} y \operatorname{Im} F(x+iy) = \mu(\{x\}) \tag{3.1.12}$$

for all <sup>x</sup> <sup>∈</sup> <sup>R</sup>; cf. Lemma A.2.6. In particular,

$$\lim\_{y \downarrow 0} y \, F(x+iy) = \lim\_{y \downarrow 0} iy \, \text{Im} \, F(x+iy) \tag{3.1.13}$$

for all x ∈ R. Note also that the Borel transform F is a Nevanlinna function (see Definition A.2.3) and μ(R) = sup_{y>0} y Im F(iy). Conversely, every Nevanlinna function F with

$$\sup\_{y>0} y \operatorname{Im} F(iy) < \infty \quad \text{and} \quad \lim\_{y \to \infty} F(iy) = 0$$

is the Borel transform of a finite Borel measure μ as in (3.1.10); cf. Proposition A.5.3.
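For a purely atomic measure the Borel transform (3.1.10) is a finite sum of simple fractions, so the limits (3.1.12) can be checked directly. The following sketch is illustrative (the two atoms are arbitrary choices), not part of the text:

```python
# Numerical sketch (illustrative): the Borel transform (3.1.10) of the
# two-atom measure mu = 1.0*delta_0 + 0.5*delta_1 in closed form, and the
# limits (3.1.12): y*Im F(x+iy) -> mu({x}) as y -> 0.
atoms = [(0.0, 1.0), (1.0, 0.5)]        # (position t_k, weight w_k)

def F(lam):
    # F(lambda) = sum_k w_k / (t_k - lambda)
    return sum(w / (t - lam) for t, w in atoms)

def point_mass_limit(x, y=1e-9):
    # y * Im F(x + iy) for small y approximates mu({x})
    return y * F(complex(x, y)).imag

assert abs(point_mass_limit(0.0) - 1.0) < 1e-6   # atom of weight 1 at x = 0
assert abs(point_mass_limit(1.0) - 0.5) < 1e-6   # atom of weight 0.5 at x = 1
assert abs(point_mass_limit(0.5)) < 1e-6         # no atom at x = 0.5
```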

An important observation concerning the boundary values Im F(x + i0) is contained in the following theorem, which is formulated in terms of the symmetric derivative

$$(D\mu)(x) = \lim\_{\epsilon \downarrow 0} \frac{\mu((x-\epsilon, x+\epsilon))}{2\epsilon} \tag{3.1.14}$$

of μ. Here the limit is assumed to take values in [0, ∞]. Note that if for some x ∈ R the derivative μ′(x) in (3.1.4) exists with values in [0, ∞], then the same is true for the symmetric derivative (Dμ)(x).

**Theorem 3.1.4.** Let μ be a finite Borel measure on R, let F be its Borel transform, and let x ∈ R. If the symmetric derivative (Dμ)(x) exists with values in [0, ∞], then also Im F(x + i0) exists with values in [0, ∞] and

$$\operatorname{Im} F(x+i0) = \pi(D\mu)(x) \quad (\in [0,\infty]).\tag{3.1.15}$$

In particular, the following statements hold:

(i) the limit Im F(x + i0) exists finitely for every x ∈ E0, and hence m-almost everywhere on R;

(ii) the set E in (3.1.6) is contained in the set F in (3.1.11), and m(R \ F) = 0 and μ(R \ F) = 0.


Proof. Assume first that the symmetric derivative (Dμ)(x) exists in [0, ∞) for some x ∈ R and choose c−, c+ ∈ R with c− < (Dμ)(x) < c+. From the definition (3.1.14) it follows that there exists δ > 0 such that

$$2c\_-\epsilon \le \mu(I\_\epsilon) \le 2c\_+\epsilon, \qquad I\_\epsilon := (x-\epsilon, x+\epsilon), \tag{3.1.16}$$

holds for all ε ∈ (0, δ]. In the following set Ky(s) := y/(s² + y²) for y > 0 and s ∈ R. Then one has

$$\begin{split} \operatorname{Im} F(x+iy) &= \int\_{\mathbb{R}} \frac{y}{(x-t)^2 + y^2} \, d\mu(t) \\ &= \int\_{\mathbb{R}} K\_y(x-t) \, d\mu(t) \\ &= \int\_{I\_\delta} K\_y(x-t) \, d\mu(t) + \int\_{\mathbb{R}\backslash I\_\delta} K\_y(x-t) \, d\mu(t) \end{split} \tag{3.1.17}$$

for y > 0. First one estimates the second term on the right-hand side in (3.1.17). For t ∈ R \ Iδ one has |t − x| ≥ δ, so that 0 ≤ Ky(x − t) ≤ Ky(δ). Then it is clear that

$$0 \le \int\_{\mathbb{R}\backslash I\_{\delta}} K\_y(x - t) \, d\mu(t) \le K\_y(\delta)\mu(\mathbb{R}) \to 0 \tag{3.1.18}$$

for y ↓ 0. In order to estimate the first integral on the right-hand side in (3.1.17) one uses the identity

$$\int\_{I\_\delta} K\_y(t - x) \, d\mu(t) = \mu(I\_\delta) K\_y(\delta) - \int\_0^\delta K\_y'(\epsilon) \mu(I\_\epsilon) \, d\epsilon. \tag{3.1.19}$$

To prove (3.1.19), observe that

$$\begin{aligned} \int\_0^\delta K\_y'(\epsilon) \mu(I\_\epsilon) \, d\epsilon &= \int\_0^\delta \int\_{x-\epsilon}^{x+\epsilon} K\_y'(\epsilon) \, d\mu(t) \, d\epsilon \\ &= \int\_{x-\delta}^x \int\_{x-t}^\delta K\_y'(\epsilon) \, d\epsilon \, d\mu(t) + \int\_x^{x+\delta} \int\_{t-x}^\delta K\_y'(\epsilon) \, d\epsilon \, d\mu(t) \\ &= \mu(I\_\delta) K\_y(\delta) - \int\_{x-\delta}^{x+\delta} K\_y(t-x) \, d\mu(t), \end{aligned}$$

where Fubini's theorem on the triangle in the (t, ε)-plane bounded by the lines ε = t − x and ε = x − t, with 0 ≤ ε ≤ δ, was used. Now integration by parts, the estimate (3.1.16), the fact that −K′y(ε) ≥ 0 for ε, y > 0, and (3.1.19) give the estimate

$$\begin{aligned} 2c\_- \arctan(\delta/y) &= 2c\_- \int\_0^\delta K\_y(\epsilon) \, d\epsilon \\ &= 2c\_- \delta K\_y(\delta) + 2c\_- \int\_0^\delta \left(-\epsilon K\_y'(\epsilon)\right) \, d\epsilon \\ &\leq \mu(I\_\delta) K\_y(\delta) - \int\_0^\delta K\_y'(\epsilon) \mu(I\_\epsilon) \, d\epsilon \\ &= \int\_{I\_\delta} K\_y(t-x) \, d\mu(t). \end{aligned}$$

In the same way one verifies the estimate

$$\int\_{I\_{\delta}} K\_y(t - x) \, d\mu(t) \le 2c\_+ \arctan(\delta/y).$$

It follows that

$$\pi c\_- \le \liminf\_{y \downarrow 0} \int\_{I\_\delta} K\_y(t - x) \, d\mu(t) \le \limsup\_{y \downarrow 0} \int\_{I\_\delta} K\_y(t - x) \, d\mu(t) \le \pi c\_+.$$

Now (3.1.18) and (3.1.17) imply

$$
\pi c\_- \le \liminf\_{y \downarrow 0} \operatorname{Im} F(x+iy) \le \limsup\_{y \downarrow 0} \operatorname{Im} F(x+iy) \le \pi c\_+.
$$

Letting c− ↑ (Dμ)(x) and c+ ↓ (Dμ)(x), one obtains

$$\lim\_{y \downarrow 0} \text{Im}\, F(x+iy) = \pi(D\mu)(x).$$

Next the case where the symmetric derivative (Dμ)(x) exists and equals ∞ for some x ∈ R is discussed. In this situation the above reasoning leads to

$$
\pi c\_- \le \liminf\_{y \downarrow 0} \operatorname{Im} F(x+iy),
$$

for all c− > 0. This yields lim_{y↓0} Im F(x + iy) = ∞.

It remains to show assertions (i) and (ii). Recall that if μ′(x) exists at some point x ∈ R, then so does the symmetric derivative (Dμ)(x) and

$$
\mu'(x) = (D\mu)(x),
$$

with equality in [0, ∞]. For (ii) the above reasoning implies that the set E in (3.1.6) is contained in the set F in (3.1.11), and hence μ(R \ F) = 0 and m(R \ F) = 0 by (3.1.8). Assertion (i) follows in the same way from (3.1.5) and (3.1.7). □
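Theorem 3.1.4 can be tested numerically for a measure with a continuous density f, since then (Dμ)(x) = f(x) at every x and the theorem predicts Im F(x + i0) = πf(x). The following sketch is illustrative (the Gaussian density and the midpoint Riemann sum are arbitrary choices), not part of the text:

```python
# Numerical sketch (illustrative): for d(mu) = f(t) dm(t) with continuous
# density f, (Dmu)(x) = f(x) and Theorem 3.1.4 gives Im F(x+i0) = pi*f(x).
import math

def f(t):
    return math.exp(-t * t)             # illustrative density

def im_F(x, y, lo=-20.0, hi=20.0, n=400000):
    # Im F(x+iy) = integral of [y/((t-x)^2 + y^2)] * f(t) dt (Poisson kernel),
    # approximated by a midpoint Riemann sum on [lo, hi]
    h = (hi - lo) / n
    total = 0.0
    for k in range(n):
        t = lo + (k + 0.5) * h
        total += y / ((t - x) ** 2 + y * y) * f(t)
    return total * h

for x in (0.0, 1.0):
    assert abs(im_F(x, 1e-3) - math.pi * f(x)) < 5e-2
```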

It follows from Theorem 3.1.4 and (3.1.12) that Theorem 3.1.2 has a counterpart expressing minimal supports in terms of the Borel transform of μ.

**Theorem 3.1.5.** Let μ be a finite Borel measure and let F be its Borel transform. Then the sets

(i) {x ∈ F : 0 < Im F(x + i0) ≤ ∞};

(ii) {x ∈ F : 0 < Im F(x + i0) < ∞};

(iii) {x ∈ F : Im F(x + i0) = ∞};

(iv) {x ∈ F : Im F(x + i0) = ∞, lim_{y↓0} y Im F(x + iy) = 0};

(v) {x ∈ F : Im F(x + i0) = ∞, lim_{y↓0} y Im F(x + iy) > 0},

are minimal supports for μ, μac, μs, μsc, and μp, respectively.

Proof. Only statement (i) will be proved. The proofs of the other statements are similar. Let

$$\mathfrak{M} = \{ x \in \mathfrak{E} : 0 < \mu'(x) \le \infty \},$$

and note that M is a Borel set. Recall that, by Theorem 3.1.2 (i), M is a minimal support for μ. Now introduce the set

$$
\mathfrak{M}' = \left\{ x \in \mathfrak{F} \, : \, 0 < \mathrm{Im} \, F(x + i0) \le \infty \right\},
$$

which is also a Borel set, as Im F(x + iy), y > 0, and hence Im F(x + i0) are Borel measurable functions of x. Then Theorem 3.1.4 shows that M ⊂ M′ and furthermore one has

$$
\mathfrak{M}' \backslash \mathfrak{M} \subset \mathbb{R} \backslash \mathfrak{E}.
$$

Since m(R \ E) = 0 according to (3.1.8), it follows that m(M′ \ M) = 0. As M ⊂ M′ and M is a minimal support for μ, one concludes from Lemma 3.1.1 (ii) that M′ is a minimal support for μ. □

Most of the results in this section have been stated in the context of finite Borel measures on R and their Borel transforms. They will be applied to study the spectrum of self-adjoint relations and operators in Section 3.6. However, it is also useful for later reference to have similar results in the more general context of scalar Nevanlinna functions and the corresponding spectral functions; cf. Chapter 6 and Chapter 7. Let N be a scalar Nevanlinna function of the form

$$N(\lambda) = \alpha + \beta \lambda + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) \, d\tau(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{3.1.20}$$

where α ∈ R, β ≥ 0, and τ is a Borel measure on R which satisfies

$$\int\_{\mathbb{R}} \frac{1}{t^2 + 1} \, d\tau(t) < \infty;\tag{3.1.21}$$

cf. Theorem A.2.5. Then the last condition implies that μ defined by

$$d\mu(t) = \frac{d\tau(t)}{t^2 + 1} \tag{3.1.22}$$

is a finite Borel measure on R. Let F be the Borel transform of μ:

$$F(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\mu(t), \quad \lambda \in \mathbb{C} \,\backslash \,\mathbb{R}.\tag{3.1.23}$$

The connection between N and F is given in the following lemma.

**Lemma 3.1.6.** The Nevanlinna function N in (3.1.20) and the Borel transform F in (3.1.23) are connected by

$$N(\lambda) = a + b\lambda + (\lambda^2 + 1)F(\lambda), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},\tag{3.1.24}$$

where a, b ∈ R. If x ∈ R, then the limits Im N(x + i0) and Im F(x + i0) exist simultaneously with values in [0, ∞], and in that case

$$\operatorname{Im} N(x+i0) = (x^2+1)\operatorname{Im} F(x+i0) \quad \left(\in [0,\infty]\right). \tag{3.1.25}$$

Moreover, for each x ∈ R,

$$\lim\_{y \downarrow 0} y \operatorname{Re} N(x + iy) = 0 \tag{3.1.26}$$

and

$$\lim\_{y \downarrow 0} y \operatorname{Im} N(x+iy) = (x^2+1) \lim\_{y \downarrow 0} y \operatorname{Im} F(x+iy). \tag{3.1.27}$$

Proof. It is an immediate consequence of the integral representation (3.1.20) that N can be rewritten as

$$N(\lambda) = \alpha + \lambda \left(\beta + \int\_{\mathbb{R}} d\mu(t)\right) + (\lambda^2 + 1) \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\mu(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R};$$

cf. Theorem A.2.4. This leads to (3.1.24). Note that for λ = x + iy one has

$$N(x+iy) = a + b(x+iy) + ((x+iy)^2 + 1)F(x+iy),$$

whence

$$\operatorname{Im} N(x+iy) = by + (x^2+1-y^2)\operatorname{Im} F(x+iy) + 2xy \operatorname{Re} F(x+iy).$$

Now observe that for each x ∈ R one has lim_{y↓0} y Re F(x + iy) = 0 by (3.1.12). Together with the previous identity this proves the assertion in (3.1.25). Furthermore, one now sees (3.1.27) directly; cf. (3.1.12). Finally, note that

$$\operatorname{Re} N(x+iy) = a + bx + (x^2 + 1 - y^2)\operatorname{Re} F(x+iy) - 2xy \operatorname{Im} F(x+iy),$$

which together with (3.1.12) leads to the identity (3.1.26). □
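The identity (3.1.24), with a = α and b = β + μ(R) as in the proof above, can be verified in closed form when τ is a finite sum of point masses. The sketch below is illustrative (the atoms and the values of α, β are arbitrary choices), not part of the text:

```python
# Numerical sketch (illustrative): for a purely atomic tau, compare the
# representation (3.1.20) with the identity (3.1.24), where
# a = alpha and b = beta + mu(R), with mu as in (3.1.22).
atoms = [(-2.0, 1.0), (0.5, 2.0), (3.0, 0.5)]    # atoms (t_k, w_k) of tau
alpha, beta = 0.3, 0.1

def N(lam):
    # (3.1.20) for tau = sum_k w_k * delta_{t_k}
    return alpha + beta * lam + sum(w * (1.0 / (t - lam) - t / (t * t + 1.0))
                                    for t, w in atoms)

def F(lam):
    # Borel transform (3.1.23) of d(mu) = d(tau)/(t^2 + 1)
    return sum(w / (t * t + 1.0) / (t - lam) for t, w in atoms)

mu_total = sum(w / (t * t + 1.0) for t, w in atoms)   # mu(R)
a, b = alpha, beta + mu_total

# (3.1.24): N(lambda) = a + b*lambda + (lambda^2 + 1) F(lambda)
for lam in (1j, 1.0 + 0.5j, -2.0 + 1e-3j):
    assert abs(N(lam) - (a + b * lam + (lam ** 2 + 1.0) * F(lam))) < 1e-9
```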

The next corollary deals with the existence of the limit lim_{ε↓0} N(x + iε) for an arbitrary scalar Nevanlinna function N.

**Corollary 3.1.7.** Let N be a scalar Nevanlinna function. Then the limit N(x + i0) exists finitely m-almost everywhere.

Proof. It is clear from (3.1.25) and Theorem 3.1.4 that lim_{ε↓0} Im N(x + iε) exists finitely m-almost everywhere. Hence, it suffices to show that

$$\lim\_{\epsilon \downarrow 0} \text{Re}\, N(x + i\epsilon) \tag{3.1.28}$$

exists finitely m-almost everywhere. Denote by √· the branch of the square root fixed by Im √λ > 0 for λ ∈ C \ [0, ∞) and √λ ≥ 0 for λ ∈ [0, ∞). Then it is easy to


see that Im √N(λ) ≥ 0 and Im (i√N(λ)) ≥ 0 for λ ∈ C+, and hence λ ↦ √N(λ) and λ ↦ i√N(λ) are scalar Nevanlinna functions when they are extended to C− by symmetry. It follows from (3.1.25) and Theorem 3.1.4 that the limits

$$\lim\_{\epsilon \downarrow 0} \operatorname{Im} \sqrt{N(x + i\epsilon)} \quad \text{and} \quad \lim\_{\epsilon \downarrow 0} \operatorname{Re} \sqrt{N(x + i\epsilon)} = \lim\_{\epsilon \downarrow 0} \operatorname{Im} \left( i\sqrt{N(x + i\epsilon)} \right)$$

exist finitely m-almost everywhere. Since

$$\operatorname{Re} N(x+i\epsilon) = \left(\operatorname{Re}\sqrt{N(x+i\epsilon)}\right)^2 - \left(\operatorname{Im}\sqrt{N(x+i\epsilon)}\right)^2$$

it follows that the limit in (3.1.28) exists finitely m-almost everywhere. □

Let τ be the Borel measure on R in (3.1.20) which satisfies the condition (3.1.21). It has the Lebesgue decomposition

$$
\tau = \tau\_{\rm ac} + \tau\_{\rm s}, \quad \tau\_{\rm s} = \tau\_{\rm sc} + \tau\_{\rm p},
$$

where τac is absolutely continuous, τ<sup>s</sup> is singular, τsc is singular continuous, and τ<sup>p</sup> is pure point. In the next corollary, which is a consequence of Theorem 3.1.5, (3.1.22), and (3.1.25), minimal supports for these measures are expressed in terms of the boundary behavior of N.

**Corollary 3.1.8.** Let N be a Nevanlinna function with the integral representation (3.1.20) and let F be the set in (3.1.11) associated with the measure μ in (3.1.22). Then the sets

(i) {x ∈ F : 0 < Im N(x + i0) ≤ ∞};

(ii) {x ∈ F : 0 < Im N(x + i0) < ∞};

(iii) {x ∈ F : Im N(x + i0) = ∞};

(iv) {x ∈ F : Im N(x + i0) = ∞, lim_{y↓0} y Im N(x + iy) = 0};

(v) {x ∈ F : Im N(x + i0) = ∞, lim_{y↓0} y Im N(x + iy) > 0},
are minimal supports for τ , τac, τs, τsc, and τp, respectively.

## **3.2 Growth points of finite Borel measures**

Let μ be a finite Borel measure on R. In this section the set of its growth points σ(μ), defined by

$$\sigma(\mu) = \{x \in \mathbb{R} \, : \, \mu\big( (x - \epsilon, x + \epsilon) \big) > 0 \text{ for all } \epsilon > 0\},\tag{3.2.1}$$

is studied. The growth points σ(μ) and the growth points σ(μac), σ(μs), and σ(μsc) of the absolutely continuous, singular, and singular continuous part of μ will be located by means of the minimal supports expressed in terms of the Borel transform of μ.

There is an intimate connection between the set of growth points σ(μ) and supports for μ.

**Lemma 3.2.1.** Let μ be a finite Borel measure on R. Then the following statements hold:

(i) if S is a support of μ, then σ(μ) is contained in the closure of S;

(ii) σ(μ) is closed and σ(μ) is a support of μ.

Proof. (i) Let S be a support of μ, so that μ(R \ S) = 0. Assume that x ∈ σ(μ), so that for any ε > 0 one has μ((x − ε, x + ε)) > 0. Since S is a support of μ, it follows that

$$0 < \mu\{(x-\epsilon, x+\epsilon)\} = \mu\{(x-\epsilon, x+\epsilon)\cap\mathfrak{S}\},$$

which implies that for any ε > 0 the set (x − ε, x + ε) ∩ S is nonempty. Hence, there exists a sequence xn ∈ (x − 1/n, x + 1/n) ∩ S converging to x, so that x belongs to the closure of S. This shows that σ(μ) is contained in the closure of S.

(ii) In order to show that σ(μ) is closed, let xn ∈ σ(μ) converge to x ∈ R. Assume that x ∉ σ(μ). Then there is ε > 0 such that μ((x − ε, x + ε)) = 0. For this ε there exist n0 ∈ N and ε0 > 0 with (xn0 − ε0, xn0 + ε0) ⊂ (x − ε, x + ε), and hence

$$
\mu\{(x\_{n\_0} - \epsilon\_0, x\_{n\_0} + \epsilon\_0)\} \le \mu\{(x - \epsilon, x + \epsilon)\} = 0,
$$

a contradiction, since xn0 ∈ σ(μ). Therefore, x ∈ σ(μ) and σ(μ) is closed.

Next it will be verified that σ(μ) is a support for μ. For each x ∈ R \ σ(μ) there is εx > 0 such that μ((x − εx, x + εx)) = 0. Since the set σ(μ) is closed, the open intervals (x − εx, x + εx), x ∈ R \ σ(μ), form an open cover of R \ σ(μ). Then there is a countable subcover of open intervals In with μ(In) = 0 covering R \ σ(μ). It follows that

$$\mu(\mathbb{R} \backslash \sigma(\mu)) \le \sum\_{n} \mu(I\_n) = 0$$

and hence μ(R \ σ(μ)) = 0, that is, σ(μ) is a support for μ. □

For completeness it is noted that in general the set σ(μ) is not a minimal support of μ. Observe also that, by Lemma 3.2.1, the set of growth points σ(μ) has the following minimality property: each closed support S ⊂ R of μ satisfies σ(μ) ⊂ S. Therefore, one has the next corollary.

**Corollary 3.2.2.** Let μ be a finite Borel measure on R. Then σ(μ) is the smallest closed support of μ.
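Corollary 3.2.2 can be illustrated with a concrete mixed measure. The sketch below (an illustration, not part of the text; the measure μ = δ0 + m restricted to [1, 2] is an arbitrary choice) tests the growth-point condition (3.2.1), for which σ(μ) = {0} ∪ [1, 2]:

```python
# Numerical sketch (illustrative): growth points (3.2.1) of the measure
# mu = delta_0 + (Lebesgue measure restricted to [1,2]); here
# sigma(mu) = {0} ∪ [1,2] is the smallest closed support of mu.
def mu_interval(a, b):
    # mu((a, b)) in closed form: atom at 0 plus the length of (a,b) ∩ [1,2]
    point = 1.0 if a < 0.0 < b else 0.0
    ac = max(0.0, min(b, 2.0) - max(a, 1.0))
    return point + ac

def in_sigma(x, eps_list=(1.0, 0.1, 0.01, 0.001)):
    # x is a growth point if mu((x-eps, x+eps)) > 0 for all (small) eps > 0
    return all(mu_interval(x - e, x + e) > 0.0 for e in eps_list)

assert in_sigma(0.0)                                       # the atom at 0
assert in_sigma(1.0) and in_sigma(1.5) and in_sigma(2.0)   # closure of (1,2)
assert not in_sigma(0.5) and not in_sigma(2.5)
```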

The set of growth points of μ will now be described by means of the Borel transform of μ.

**Theorem 3.2.3.** Let μ be a finite Borel measure on R and let F be its Borel transform. Then

$$\sigma(\mu) = \overline{\{x \in \mathbb{R} : 0 < \liminf\_{y \downarrow 0} \text{Im} \, F(x + iy)\}}.$$

Proof. With the notation

$$\mathfrak{N} = \left\{ x \in \mathbb{R} : 0 < \liminf\_{y \downarrow 0} \text{Im} \, F(x + iy) \right\},$$

it will be proved that σ(μ) coincides with the closure of N. Recall first that, by Theorem 3.1.5 (i), the set

$$\mathfrak{M}' = \left\{ x \in \mathfrak{F} \, : \, 0 < \mathrm{Im} \, F(x + i0) \le \infty \right\}$$

is a (minimal) support for μ. Since M′ ⊂ N, it follows that N is also a support for μ. Hence, Lemma 3.2.1 (i) yields that σ(μ) is contained in the closure of N. For the converse inclusion it suffices to show N ⊂ σ(μ), since σ(μ) is closed; cf. Lemma 3.2.1 (ii). Assume that x ∉ σ(μ). Then there exists ε > 0 such that μ((x − ε, x + ε)) = 0 and it follows from

$$\operatorname{Im} F(x+iy) = \int\_{\mathbb{R}\backslash (x-\epsilon, x+\epsilon)} \frac{y}{(t-x)^2 + y^2} \, d\mu(t)$$

that Im F(x + i0) = 0. This implies x ∉ N and hence N ⊂ σ(μ). □

Analogous to Theorem 3.2.3 there are also results for the parts of the finite Borel measure μ on R in its Lebesgue decomposition. In order to describe these results one needs the following notions of closure.

**Definition 3.2.4.** Let B ⊂ R be a Borel set. The absolutely continuous closure (or essential closure) of B is defined by

$$\text{closa}(\mathfrak{B}) := \left\{ x \in \mathbb{R} \, : \, m\left( (x - \epsilon, x + \epsilon) \cap \mathfrak{B} \right) > 0 \text{ for all } \epsilon > 0 \right\}.$$

The continuous closure of B is defined by

$$\text{clos}\_c(\mathfrak{B}) := \left\{ x \in \mathbb{R} \, : \, (x - \epsilon, x + \epsilon) \cap \mathfrak{B} \text{ is not countable for all } \epsilon > 0 \right\}.$$

In general, B is not a subset of closac(B) since, e.g., isolated points of B are not contained in closac(B). Moreover, if B ⊂ B′ and m(B′ \ B) = 0, then closac(B) = closac(B′).
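The effect just described, that an isolated point drops out of the absolutely continuous closure, can be checked numerically. The sketch below is illustrative (the set B = [0, 1] ∪ {2} is an arbitrary choice), not part of the text:

```python
# Numerical sketch (illustrative): the absolutely continuous closure of
# Definition 3.2.4 for B = [0,1] ∪ {2}.  The isolated point 2 contributes no
# Lebesgue measure near itself, so clos_ac(B) = [0,1].
def m_B_interval(a, b):
    # m((a, b) ∩ B) for B = [0,1] ∪ {2}; the single point {2} contributes 0
    return max(0.0, min(b, 1.0) - max(a, 0.0))

def in_clos_ac(x, eps_list=(1.0, 0.1, 0.01, 0.001)):
    # x ∈ clos_ac(B) if m((x-eps, x+eps) ∩ B) > 0 for all (small) eps > 0
    return all(m_B_interval(x - e, x + e) > 0.0 for e in eps_list)

assert in_clos_ac(0.0) and in_clos_ac(0.5) and in_clos_ac(1.0)
assert not in_clos_ac(2.0)      # 2 ∈ B but 2 ∉ clos_ac(B)
assert not in_clos_ac(1.1)
```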

**Lemma 3.2.5.** Let B ⊂ R be a Borel set. Then the sets closac(B) and closc(B) are both closed and

$$
\text{clos}\_{\text{ac}}(\mathfrak{B}) \subset \text{clos}\_{\text{c}}(\mathfrak{B}) \subset \overline{\mathfrak{B}}.\tag{3.2.2}
$$

Moreover, the following statements hold:

(i) closac(B) = ∅ if and only if m(B) = 0;

(ii) closc(B) = ∅ if and only if B is countable.

Proof. First it will be shown that for any Borel set B ⊂ R both sets closac(B) and closc(B) are closed.

In order to show that closac(B) is closed, let xn ∈ closac(B) converge to x ∈ R. Assume that x ∉ closac(B). Then there is ε > 0 such that

$$m\{(x-\epsilon, x+\epsilon)\cap\mathfrak{B}\}=0.\tag{3.2.3}$$


For this ε there is n0 ∈ N and ε0 > 0 with (xn0 − ε0, xn0 + ε0) ⊂ (x − ε, x + ε). One then concludes from (3.2.3) that m((xn0 − ε0, xn0 + ε0) ∩ B) = 0, a contradiction, as xn0 ∈ closac(B). Therefore, x ∈ closac(B) and closac(B) is closed.

To show that closc(B) is closed, let xn ∈ closc(B) converge to x ∈ R. Assume that x ∉ closc(B). Then there is ε > 0 such that the set (x − ε, x + ε) ∩ B is countable. For this ε there exist n0 ∈ N and ε0 > 0 with

$$(x\_{n\_0} - \epsilon\_0, x\_{n\_0} + \epsilon\_0) \subset (x - \epsilon, x + \epsilon),$$

so that

$$\left( (x\_{n\_0} - \epsilon\_0, x\_{n\_0} + \epsilon\_0) \cap \mathfrak{B} \right) \subset \left( (x - \epsilon, x + \epsilon) \cap \mathfrak{B} \right)$$

is countable, a contradiction, as xn<sup>0</sup> ∈ closc(B). Therefore, x ∈ closc(B) and closc(B) is closed.

To see the first inclusion in (3.2.2), assume that x ∈ closac(B). Then one has m((x − ε, x + ε) ∩ B) > 0 for all ε > 0, and hence for all ε > 0 the set (x − ε, x + ε) ∩ B is not countable, since countable sets have Lebesgue measure zero. This implies closac(B) ⊂ closc(B). Likewise, to see the second inclusion, assume that x ∈ closc(B) and that x does not belong to the closure of B. Then there is ε > 0 such that (x − ε, x + ε) ∩ B = ∅, a contradiction. Hence, closc(B) is contained in the closure of B.

(i) (⇒) Assume that closac(B) = ∅. This implies that for all x ∈ R there exists εx > 0 such that m((x − εx, x + εx) ∩ B) = 0. First assume that B is compact. Then the open sets (x − εx, x + εx), x ∈ B, form an open cover of B. Therefore, there exists a finite subcover (xi − εi, xi + εi), i = 1, …, n, of B such that

$$\mathfrak{B} \subset \bigcup\_{i=1}^{n} \left( x\_i - \epsilon\_i, x\_i + \epsilon\_i \right) \cap \mathfrak{B},$$

and hence

$$m(\mathfrak{B}) \le \sum\_{i=1}^{n} m\left( (x\_i - \epsilon\_i, x\_i + \epsilon\_i) \cap \mathfrak{B} \right) = 0.$$

For arbitrary Borel sets B the (inner) regularity of the Lebesgue measure implies m(B) = 0.

(⇐) If m(B) = 0, then m((x − ε, x + ε) ∩ B) = 0 for all x ∈ R and all ε > 0. Therefore, closac(B) = ∅.

(ii) (⇒) Assume that closc(B) = ∅. This implies that for all x ∈ R there exists εx > 0 such that (x − εx, x + εx) ∩ B is countable; in particular, this holds for all rational points xi. The countably many open sets (xi − εxi, xi + εxi) cover R, and hence B, and this implies that B is countable.

(⇐) If B is countable, then (x − ε, x + ε) ∩ B is countable for all x ∈ R and all ε > 0. Therefore, closc(B) = ∅. □

Here is the promised treatment of the absolutely continuous, singular, and singular continuous parts of the Borel measure μ.

**Theorem 3.2.6.** Let μ be a finite Borel measure on R and let F be its Borel transform. Then the following statements hold:

$$\text{(i)}\ \sigma(\mu\_{\text{ac}}) = \text{clos}\_{\text{ac}}\left(\{x \in \mathfrak{F} : 0 < \text{Im}\, F(x + i0) < \infty\}\right);$$

$$\text{(ii)}\ \sigma(\mu\_{\mathrm{s}}) \subset \overline{\{x \in \mathfrak{F} : \text{Im}\, F(x + i0) = \infty\}};$$

$$\text{(iii)}\ \sigma(\mu\_{\mathrm{sc}}) \subset \text{clos}\_{\text{c}}\left(\left\{x \in \mathfrak{F} : \text{Im}\, F(x + i0) = \infty, \ \lim\_{y \downarrow 0} yF(x + iy) = 0\right\}\right).$$

Proof. (i) Let

$$\mathfrak{M}'\_{\mathrm{ac}} := \left\{ x \in \mathfrak{F} : 0 < \mathrm{Im} \, F(x + i0) < \infty \right\},$$

and note that M′ac is a Borel set. It is claimed that

$$
\sigma(\mu\_{\rm ac}) = \text{clos}\_{\rm ac}(\mathfrak{M}'\_{\rm ac}).\tag{3.2.4}
$$

To verify the inclusion (⊂) in (3.2.4), assume that x ∉ closac(M′ac). Then there exists ε > 0 such that

$$m\left( (x - \epsilon, x + \epsilon) \cap \mathfrak{M}'\_{\mathrm{ac}} \right) = 0.$$

As μac is absolutely continuous with respect to the Lebesgue measure m, also

$$
\mu\_{\rm ac} \left( (x - \epsilon, x + \epsilon) \cap \mathfrak{M}'\_{\rm ac} \right) = 0. \tag{3.2.5}
$$

Furthermore, by Theorem 3.1.5 (ii), the set M′ac is a minimal support for μac and, in particular, μac(R \ M′ac) = 0. Hence,

$$
\mu\_{\rm ac} \left( (x - \epsilon, x + \epsilon) \setminus \mathfrak{M}'\_{\rm ac} \right) = 0 \tag{3.2.6}
$$

and from (3.2.5)–(3.2.6) one obtains μac((x − ε, x + ε)) = 0. Hence, x ∉ σ(μac). Thus, the inclusion (⊂) in (3.2.4) has been shown.

For the converse inclusion (⊃), let x ∉ σ(μac). Then there exists ε > 0 such that

$$0 = \mu\_{\text{ac}}\left((x-\epsilon, x+\epsilon)\right) = \int\_{(x-\epsilon, x+\epsilon)} (D\mu)(t) \, dm(t),$$

where in the last equality the Radon–Nikodým theorem was used; cf. (3.1.3) and note that ν′μ = μ′ = Dμ m-almost everywhere. Due to Theorem 3.1.4 and the fact that Im F(t + i0) ≥ 0 for all t ∈ F, one concludes that

$$\begin{aligned} 0 &= \frac{1}{\pi} \int\_{(x-\epsilon, x+\epsilon)} \text{Im} \, F(t+i0) \, dm(t) \\ &= \frac{1}{\pi} \int\_{(x-\epsilon, x+\epsilon)\cap\mathfrak{M}'\_{\text{ac}}} \text{Im} \, F(t+i0) \, dm(t) .\end{aligned}$$

This implies m((x − ε, x + ε) ∩ M′ac) = 0, since Im F(t + i0) is positive on M′ac. Hence, x ∉ closac(M′ac). Thus, the inclusion (⊃) in (3.2.4) has been shown. Therefore, the equality (3.2.4) has been established, which gives assertion (i).

(ii) According to Theorem 3.1.5 (iii), the set {x ∈ F : Im F(x + i0) = ∞} is a minimal support for the singular part μs of μ. Since σ(μs) is contained in the closure of this set by Lemma 3.2.1 (i), the assertion follows.

(iii) By Theorem 3.1.5 (iv) and (3.1.13), the Borel set

$$\mathfrak{M}'\_{\mathrm{sc}} := \left\{ x \in \mathfrak{F} : \mathrm{Im} \, F(x + i0) = \infty, \, \lim\_{y \downarrow 0} y F(x + iy) = 0 \right\}$$

is a minimal support for μsc and hence, in particular, μsc(R \ M′sc) = 0. Let closc(M′sc) be the continuous closure of M′sc, which is a Borel set, as it is closed; cf. Lemma 3.2.5. It will be shown that closc(M′sc) is a support for μsc, that is,

$$
\mu\_{\rm sc} \left( \mathbb{R} \setminus \text{clos}\_{\rm c} (\mathfrak{M}'\_{\rm sc}) \right) = 0,\tag{3.2.7}
$$

since this implies that σ(μsc) ⊂ closc(M′sc); cf. Lemma 3.2.1 (i) and Lemma 3.2.5.

In fact, for x ∈ R \ closc(M′sc) there exists by definition ε > 0 such that (x − ε, x + ε) ∩ M′sc is countable; thus μsc((x − ε, x + ε) ∩ M′sc) = 0, as μsc is continuous. Consequently,

$$
\mu\_{\rm sc}((x-\epsilon, x+\epsilon)) \le \mu\_{\rm sc}((x-\epsilon, x+\epsilon) \cap \mathfrak{M}'\_{\rm sc}) + \mu\_{\rm sc}(\mathbb{R} \setminus \mathfrak{M}'\_{\rm sc}) = 0.
$$

This yields μsc(K) = 0 for each compact set K ⊂ R \ closc(M′sc) and hence, by the (inner) regularity of the finite measure μsc, (3.2.7) follows. □
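The boundary behavior used in Theorem 3.2.6 can be checked numerically in the simplest case. For the unit point mass μ = δ0 the Borel transform is F(λ) = −1/λ, so Im F(x + i0) = ∞ exactly at the mass point x = 0, while y Im F(iy) → μ({0}) = 1 as y ↓ 0. A minimal numerical sketch (an illustration, not part of the text; plain Python):

```python
# Borel transform of the point mass mu = delta_0:
# F(lambda) = integral of d mu(t) / (t - lambda) = 1/(0 - lambda) = -1/lambda.
def F(lam: complex) -> complex:
    return -1.0 / lam

# Boundary behavior at x = 0 (a mass point): Im F(iy) blows up like 1/y,
# and y * Im F(iy) recovers the mass mu({0}) = 1.
for y in (1e-2, 1e-4, 1e-6):
    val = F(complex(0.0, y))
    assert abs(y * val.imag - 1.0) < 1e-9   # y * Im F(iy) -> mu({0}) = 1
    assert val.imag > 1.0 / (2 * y)         # Im F(iy) -> infinity as y -> 0

# Boundary behavior at x = 1 (outside the support): Im F(x + i0) = 0,
# consistent with 1 not being a growth point of mu.
val = F(complex(1.0, 1e-8))
assert abs(val.imag) < 1e-7
print("boundary values consistent with Theorem 3.2.6")
```

Here the minimal support of μs = δ0 is {0}, the single point where Im F(x + i0) = ∞, in accordance with item (ii) of the theorem.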

## **3.3 Spectra of self-adjoint relations**

The spectrum of a self-adjoint relation or operator in a Hilbert space will be studied in terms of its spectral measure. In particular, a division of the spectrum into absolutely continuous and singular spectra will be introduced based on the Lebesgue decomposition of a finite Borel measure; cf. Section 3.1.

Let A be a self-adjoint relation in the Hilbert space H. Then σ(A) ⊂ R by Theorem 1.5.5 and σr(A) = ∅, and hence σ(A) = σp(A) ∪ σc(A); cf. Proposition 1.4.4. The spectral measure E(·) of A satisfies

$$(A - \lambda)^{-1} = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, dE(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

cf. (1.5.6). First the parts σp(A) and σc(A) of the spectrum σ(A) will be characterized in terms of the spectral measure E(·). These results will play an important role in the further development; cf. Section 3.5 and Section 3.6. The facts in Proposition 3.3.1 are immediate consequences of the orthogonal decomposition

$$\mathfrak{H} = \mathfrak{H}\_{\mathrm{op}} \oplus \mathfrak{H}\_{\mathrm{mul}}, \quad A = A\_{\mathrm{op}} \widehat{\oplus} A\_{\mathrm{mul}}, \tag{3.3.1}$$

where Hop is the closure of dom A and Hmul = mul A, of the self-adjoint relation A (see Theorem 1.5.1) and the properties of the spectral measure of Aop.

**Proposition 3.3.1.** Let A be a self-adjoint relation in H with spectral measure E(·). Then the following statements hold:


(i) λ ∈ σ(A) if and only if E((λ − ε, λ + ε)) ≠ 0 for all ε > 0;

(ii) λ ∈ σp(A) if and only if E({λ}) ≠ 0; in this case

$$\mathfrak{N}\_{\lambda}(A) = \left\{ \{ E(\{\lambda\})h, \lambda E(\{\lambda\})h \} : h \in \mathfrak{H} \right\};$$

(iii) λ ∈ σc(A) if and only if E({λ}) = 0 and E((λ − ε, λ + ε)) ≠ 0 for all ε > 0.

A further subdivision of the spectrum will be introduced analogous to the Lebesgue decomposition of a finite Borel measure on R; cf. Section 3.1. This requires another description of the spectrum via the introduction of a collection of finite Borel measures induced by the spectral function. Let A be a self-adjoint relation in H with spectral measure E(·). For each h ∈ H, define μh by

$$
\mu\_h = (E(\cdot)h, h) = \left(E\_{\rm op}(\cdot)P\_{\rm op}h, P\_{\rm op}h\right), \tag{3.3.2}
$$

so that μh is a regular Borel measure on R. Note that μh = 0 for h ∈ Hmul. The set of growth points σ(μh) of μh is given by

$$\sigma(\mu\_h) = \left\{ x \in \mathbb{R} \, : \, \mu\_h((x-\epsilon, x+\epsilon)) > 0 \text{ for all } \epsilon > 0 \right\}.$$

It will be shown that the spectrum of A is made up of the growth points of the measures μh for a dense set of elements h ∈ H. Furthermore, the statement in the following proposition is local in the sense that it concerns the spectrum of A relative to an open interval Δ ⊂ R; cf. Definition 3.4.9.

**Proposition 3.3.2.** Let A be a self-adjoint relation in H, let Δ ⊂ R be an open interval, and assume that DΔ is a subset of the closed subspace E(Δ)H such that

$$
\overline{\operatorname{span}} \mathcal{D}\_{\Delta} = E(\Delta) \mathfrak{H}.
$$

Then the following identities hold:

$$\overline{\sigma(A)\cap\Delta} = \overline{\bigcup\_{h\in E(\Delta)\,\mathfrak{H}} \sigma(\mu\_h)} = \overline{\bigcup\_{h\in \mathcal{D}\_\Delta} \sigma(\mu\_h)}.\tag{3.3.3}$$

Proof. First it will be shown that

$$\overline{\sigma(A)\cap\Delta} \supset \overline{\bigcup\_{h\in E(\Delta)\mathfrak{H}} \sigma(\mu\_h)} \supset \overline{\bigcup\_{h\in \mathcal{D}\_\Delta} \sigma(\mu\_h)}.\tag{3.3.4}$$

For this purpose assume that x does not belong to the closure of σ(A) ∩ Δ. Then there exists ε > 0 such that (x − ε, x + ε) ∩ Δ contains no spectrum of A. By Proposition 3.3.1 (i), this yields

$$E\left( (x - \epsilon, x + \epsilon) \cap \Delta \right) = 0$$

and for h ∈ E(Δ)H one obtains

$$\begin{aligned} \mu\_h \left( (x - \epsilon, x + \epsilon) \right) &= \left( E \left( (x - \epsilon, x + \epsilon) \right) h, h \right) \\ &= \left( E \left( (x - \epsilon, x + \epsilon) \right) E(\Delta) h, h \right) \\ &= \left( E \left( (x - \epsilon, x + \epsilon) \cap \Delta \right) h, h \right) \\ &= 0. \end{aligned}$$

Therefore, (x − ε, x + ε) ∩ σ(μh) = ∅ for all h ∈ E(Δ)H, and thus

$$x \notin \overline{\bigcup\_{h \in E(\Delta) \mathfrak{H}} \sigma(\mu\_h)}.$$

Hence, the inclusions (3.3.4) follow. Next it will be shown that

$$\overline{\bigcup\_{h \in \mathcal{D}\_{\Delta}} \sigma(\mu\_h)} \supset \overline{\sigma(A) \cap \Delta},$$

which, together with (3.3.4), yields (3.3.3). For this purpose, assume that

$$x \notin \overline{\bigcup\_{h \in \mathcal{D}\_{\Delta}} \sigma(\mu\_h)}.$$

Then there exists ε > 0 such that (x − ε, x + ε) ⊂ R \ σ(μh) for all h ∈ DΔ, that is,

$$\left\| E\left( (x - \epsilon, x + \epsilon) \right) h \right\|^2 = \mu\_h \left( (x - \epsilon, x + \epsilon) \right) = 0 \tag{3.3.5}$$

for all h ∈ DΔ, and hence for all h ∈ span DΔ. Since by assumption span DΔ is dense in E(Δ)H, it follows that (3.3.5) holds for all h ∈ E(Δ)H and hence, again by Proposition 3.3.1 (i),

$$E\left( (x - \epsilon, x + \epsilon) \cap \Delta \right) h = E\left( (x - \epsilon, x + \epsilon) \right) E(\Delta) h = 0$$

for all h ∈ H. This shows that (x − ε, x + ε) ∩ Δ does not contain spectrum of A; in particular, x does not belong to the closure of σ(A) ∩ Δ. □
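In finite dimensions Proposition 3.3.2 can be seen very concretely: for a diagonal matrix A the spectral measure E(B) is the projection onto the eigenvectors whose eigenvalue lies in B, the measure μh is the pure point measure with weight |hi|² at each eigenvalue λi, and its growth points are exactly the eigenvalues carrying mass. A small self-contained sketch (illustrative only; the diagonal matrix and vectors are chosen here, not taken from the text):

```python
# Self-adjoint operator modeled as multiplication on C^3: A = diag(1, 2, 3).
eigs = [1.0, 2.0, 3.0]

def mu(h, borel_set):
    """mu_h(B) = (E(B)h, h): total weight |h_i|^2 of eigenvalues lying in B."""
    return sum(abs(hi) ** 2 for hi, lam in zip(h, eigs) if borel_set(lam))

def growth_points(h):
    """sigma(mu_h): for a pure point measure, the eigenvalues carrying mass."""
    return {lam for hi, lam in zip(h, eigs) if abs(hi) > 0}

# For the standard basis vectors e_i one has sigma(mu_{e_i}) = {lambda_i};
# their union over a spanning set recovers the whole spectrum, as in (3.3.3).
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
union = set()
for h in basis:
    union |= growth_points(h)
assert union == {1.0, 2.0, 3.0}                      # = sigma(A)

# A single vector with all components nonzero already sees every growth point:
h = (0.5, 0.5, 0.5)
assert growth_points(h) == {1.0, 2.0, 3.0}
assert mu(h, lambda lam: 1.5 < lam < 2.5) == 0.25    # weight |0.5|^2 at lambda = 2
print("union of growth points equals the spectrum")
```

The role of the dense set DΔ in the proposition corresponds here to choosing any spanning family of vectors: each vector sees only the eigenvalues on which it has a nonzero component, and density guarantees that no part of the spectrum is missed.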

The collection of Borel measures μh, h ∈ H, as defined in (3.3.2), is now used to introduce a number of subspaces of H.

**Definition 3.3.3.** Let A be a self-adjoint relation in H. The pure point subspace, the absolutely continuous subspace, and the singular continuous subspace corresponding to Aop are defined by

$$\begin{aligned} \mathfrak{H}\_{\mathrm{p}}(A\_{\mathrm{op}}) &= \{ h \in \mathfrak{H} : \mu\_{h} \text{ is pure point} \}, \\ \mathfrak{H}\_{\mathrm{ac}}(A\_{\mathrm{op}}) &= \{ h \in \mathfrak{H} : \mu\_{h} \text{ is absolutely continuous} \}, \\ \mathfrak{H}\_{\mathrm{sc}}(A\_{\mathrm{op}}) &= \{ h \in \mathfrak{H} : \mu\_{h} \text{ is singular continuous} \}, \end{aligned}$$

respectively.

In conjunction with the orthogonal decomposition (3.3.1), these subspaces span the original Hilbert space and lead to invariant parts of the self-adjoint relation, see, e.g., [649, Theorem VII.4] or [691, Proposition 9.3].

**Theorem 3.3.4.** Let A be a self-adjoint relation in H. Then Hp(Aop), Hac(Aop), and Hsc(Aop) are mutually orthogonal closed subspaces of H and

$$\mathfrak{H} = \mathfrak{H}\_{\mathrm{p}}(A\_{\mathrm{op}}) \oplus \mathfrak{H}\_{\mathrm{ac}}(A\_{\mathrm{op}}) \oplus \mathfrak{H}\_{\mathrm{sc}}(A\_{\mathrm{op}}) \oplus \mathfrak{H}\_{\mathrm{mul}}.$$

Each of the Hilbert spaces Hp(Aop), Hac(Aop), and Hsc(Aop) is invariant for the operator Aop, and the restrictions

$$\begin{aligned} A\_{\mathrm{op}}^{\mathrm{p}} &= A\_{\mathrm{op}} \restriction \mathfrak{H}\_{\mathrm{p}}(A\_{\mathrm{op}}), \\ A\_{\mathrm{op}}^{\mathrm{ac}} &= A\_{\mathrm{op}} \restriction \mathfrak{H}\_{\mathrm{ac}}(A\_{\mathrm{op}}), \\ A\_{\mathrm{op}}^{\mathrm{sc}} &= A\_{\mathrm{op}} \restriction \mathfrak{H}\_{\mathrm{sc}}(A\_{\mathrm{op}}), \end{aligned}$$

are self-adjoint operators in Hp(Aop), Hac(Aop), and Hsc(Aop), respectively.

By means of these subspaces one defines, in analogy with the case of finite Borel measures, the singular subspace and the continuous subspace corresponding to Aop by

$$\mathfrak{H}\_{\sf s}(A\_{\rm op}) = \mathfrak{H}\_{\sf p}(A\_{\rm op}) \oplus \mathfrak{H}\_{\sf sc}(A\_{\rm op}) \quad \text{and} \quad \mathfrak{H}\_{\sf c}(A\_{\rm op}) = \mathfrak{H}\_{\sf ac}(A\_{\rm op}) \oplus \mathfrak{H}\_{\sf sc}(A\_{\rm op}),$$

respectively. The restrictions of Aop to these subspaces are denoted by As op and Ac op, respectively, and it follows that

$$A^{\rm s}\_{\rm op} = A^{\rm p}\_{\rm op} \widehat{\oplus} A^{\rm sc}\_{\rm op} \quad \text{and} \quad A^{\rm c}\_{\rm op} = A^{\rm ac}\_{\rm op} \widehat{\oplus} A^{\rm sc}\_{\rm op}.$$

**Definition 3.3.5.** Let A be a self-adjoint relation in H. The absolutely continuous spectrum σac(A), the singular continuous spectrum σsc(A), and the singular spectrum σs(A) of A are defined by

$$
\sigma\_{\rm ac}(A) = \sigma\left(A\_{\rm op}^{\rm ac}\right), \quad \sigma\_{\rm sc}(A) = \sigma\left(A\_{\rm op}^{\rm sc}\right), \quad \text{and} \quad \sigma\_{\rm s}(A) = \sigma\left(A\_{\rm op}^{\rm s}\right),
$$

respectively.

Note that for the pure point part Ap op one only has that σ(Ap op) is the closure of σp(A). The spectral measures of the self-adjoint operators Aac op, Asc op, and As op in the Hilbert spaces Hac(Aop), Hsc(Aop), and Hs(Aop) are given by the corresponding restrictions of the spectral measure E(·) of A. These spectral measures will be denoted by Eac(·), Esc(·), and Es(·), respectively.

The following corollary relates the absolutely continuous, singular continuous, and singular spectrum of A in an open interval Δ with the growth points of the absolutely continuous, singular continuous, and singular parts of the measures μh.

**Corollary 3.3.6.** Let A be a self-adjoint relation in H, let Δ ⊂ R be an open interval, and assume that DΔ is a subset of the closed subspace E(Δ)H such that

$$
\overline{\text{span}} \, \mathcal{D}\_{\Delta} = E(\Delta) \mathfrak{H}.
$$

Denote by μh,ac, μh,sc, and μh,s the absolutely continuous, singular continuous, and singular parts in the Lebesgue decomposition of the Borel measure μh in (3.3.2). Then the following identity holds:

$$\overline{\sigma\_i(A) \cap \Delta} = \overline{\bigcup\_{h \in \mathcal{D}\_{\Delta}} \sigma(\mu\_{h,i})}, \qquad i = \text{ac, sc, s.}$$

Proof. Observe first that the absolutely continuous, singular continuous, and singular part of the Borel measure μh, h ∈ H, are given by

$$
\mu\_{h, \text{ac}} = \mu\_{P\_{\text{ac}}h}, \quad \mu\_{h, \text{sc}} = \mu\_{P\_{\text{sc}}h}, \quad \text{and} \quad \mu\_{h, \text{s}} = \mu\_{P\_{\text{s}}h}, \tag{3.3.6}
$$

respectively, where Pi denotes the orthogonal projection onto the corresponding Hilbert space Hi(Aop), i = ac, sc, s. This follows from the uniqueness of the Lebesgue decomposition and Theorem 3.3.4. If μihi = (Ei(·)hi, hi), hi ∈ Hi(Aop), is the Borel measure defined with the help of the spectral measure Ei(·) of Ai op, i = ac, sc, s, then Definition 3.3.5, Proposition 3.3.2, and (3.3.6) yield

$$\overline{\sigma\_i(A) \cap \Delta} = \overline{\bigcup\_{h\_i \in P\_i \mathcal{D}\_\Delta} \sigma(\mu\_{h\_i}^i)} = \overline{\bigcup\_{h \in \mathcal{D}\_\Delta} \sigma(\mu\_{P\_i h})} = \overline{\bigcup\_{h \in \mathcal{D}\_\Delta} \sigma(\mu\_{h,i})}$$

for i = ac, sc, s. Here it was also used that the linear span of the set PiDΔ is dense in Ei(Δ)Hi(Aop) = PiE(Δ)H. □

**Example 3.3.7.** Let μ be a Borel measure on R and consider the maximal multiplication operator by the independent variable in L²μ(R), given by

$$(Af)(t) = tf(t), \quad \text{dom}\, A = \left\{ f \in L^2\_{\mu}(\mathbb{R}) : t \mapsto tf(t) \in L^2\_{\mu}(\mathbb{R}) \right\}.$$

The operator A is self-adjoint in L²μ(R), and for every Borel set B ⊂ R the spectral measure of A is given by

$$E(\mathfrak{B})h = \chi\_{\mathfrak{B}}h, \qquad h \in L^2\_{\mu}(\mathbb{R}),$$

where χB denotes the characteristic function of B. For h ∈ L²μ(R) the Borel measure in (3.3.2) satisfies

$$\mu\_h(\mathfrak{B}) = (E(\mathfrak{B})h, h)\_{L^2\_{\mu}(\mathbb{R})} = \int\_{\mathfrak{B}} |h(t)|^2 \, d\mu(t)$$

for all Borel sets B ⊂ R. It is not difficult to check that σ(A) = σ(μ). Furthermore, the Lebesgue decomposition μ = μac + μs, where μs = μsc + μp, gives rise to the orthogonal decompositions

$$L^2\_{\mu}(\mathbb{R}) = L^2\_{\mu\_{\rm ac}}(\mathbb{R}) \oplus L^2\_{\mu\_{\rm s}}(\mathbb{R}) \quad \text{and} \quad L^2\_{\mu\_{\rm s}}(\mathbb{R}) = L^2\_{\mu\_{\rm sc}}(\mathbb{R}) \oplus L^2\_{\mu\_{\rm p}}(\mathbb{R}).$$

For the spectral subspaces of A in Definition 3.3.3 one has Hi(A) = L²μi(R), i = ac, sc, s, and this implies

$$
\sigma\_{\rm ac}(A) = \sigma(\mu\_{\rm ac}), \quad \sigma\_{\rm sc}(A) = \sigma(\mu\_{\rm sc}), \quad \text{and} \quad \sigma\_{\rm s}(A) = \sigma(\mu\_{\rm s}).
$$
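For a concrete instance of Example 3.3.7 (the specific measure is chosen here for illustration and is not taken from the text), let μ = m|[0,1] + δ2, the sum of the Lebesgue measure restricted to [0, 1] and the unit point mass at 2. Then the Lebesgue decomposition of μ and the resulting spectra of the multiplication operator A read

```latex
% Illustrative data: mu = m|_{[0,1]} + \delta_2 in Example 3.3.7.
\mu_{\rm ac} = m|_{[0,1]}, \qquad \mu_{\rm p} = \delta_2, \qquad \mu_{\rm sc} = 0,
\qquad\text{so that}\qquad
\sigma_{\rm ac}(A) = [0,1], \quad \sigma_{\rm p}(A) = \{2\}, \quad
\sigma_{\rm sc}(A) = \emptyset, \quad \sigma(A) = [0,1] \cup \{2\}.
```

In particular, the singular spectrum σs(A) = {2} is carried entirely by the one-dimensional subspace L²δ2(R) spanned by the characteristic function of {2}.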

## **3.4 Simple symmetric operators**

It will be shown that any closed symmetric relation in a Hilbert space can be decomposed into the orthogonal componentwise sum of a closed simple, i.e., completely non-self-adjoint, symmetric operator, and a self-adjoint relation. Criteria for the absence of the self-adjoint relation in this decomposition will be given, and a local version of simplicity will be studied.

First some attention is paid to the notions of invariance and reduction. These notions appeared already in the self-adjoint case in the previous section when subdividing the spectrum, and are also important in the description of self-adjoint extensions of symmetric relations. Let S be a closed symmetric relation in the Hilbert space H. Decompose H as H = H′ ⊕ H″, let P′ and P″ be the orthogonal projections onto H′ and H″, respectively, and define

$$
\widehat{P}'\{f,g\} = \{P'f, P'g\} \quad \text{and} \quad \widehat{P}''\{f,g\} = \{P''f, P''g\}, \quad f, g \in \mathfrak{H}.
$$

The closed symmetric relation S gives rise to the restrictions

$$S' = S \cap (\mathfrak{H}')^2 \subset \widehat{P}'S \quad \text{and} \quad S'' = S \cap (\mathfrak{H}'')^2 \subset \widehat{P}''S,\tag{3.4.1}$$

which are closed symmetric relations and

$$S' \oplus S'' \subset S. \tag{3.4.2}$$

In order to describe when S′ and S″ span S the following notions are useful. The subspaces H′ and H″ are called invariant under the symmetric relation S if S′ = P̂′S or S″ = P̂″S, respectively. Clearly, the spaces H′ or H″ are invariant under S if

$$
\widehat{P}'S \subset S \quad \text{or} \quad \widehat{P}''S \subset S,
$$

respectively. In the next lemma it turns out that H′ is invariant under S if and only if H″ is invariant under S, in which case S′ and S″ can be orthogonally split off from S, i.e., S = S′ ⊕ S″.

**Lemma 3.4.1.** Let S be a closed symmetric relation in H = H′ ⊕ H″ and let S′ and S″ be as in (3.4.1). Then the following statements hold:

(i) S′ = P̂′S or, equivalently, S″ = P̂″S implies that S = S′ ⊕ S″.

(ii) If S′ is self-adjoint in H′, then S′ = P̂′S and S″ = P̂″S.

Assume, in addition, that S is self-adjoint. Then

(iii) S′ = P̂′S or, equivalently, S″ = P̂″S implies that S′ and S″ are self-adjoint in H′ and H″, respectively.

Proof. (i) Assume that S′ = P̂′S. Since S′ ⊕ S″ ⊂ S by (3.4.2), it suffices to show that S ⊂ S′ ⊕ S″. Let {f, f′} ∈ S and decompose {f, f′} with respect to H = H′ ⊕ H″ as

$$\{f, f'\} = \{h, h'\} + \{k, k'\}, \quad h, h' \in \mathfrak{H}', \quad k, k' \in \mathfrak{H}''.$$

Then {h, h′} ∈ P̂′S = S′ ⊂ S and therefore {k, k′} ∈ S ∩ (H″)² = S″. Hence, S = S′ ⊕ S″, which implies that S″ = P̂″S.

(ii) Assume that S′ is self-adjoint in H′. To show that P̂′S ⊂ S′, let {f, f′} ∈ S and consider {P′f, P′f′} ∈ P̂′S. Since S is symmetric, it follows for all {h, h′} ∈ S′ ⊂ S that

$$(P'f',h)\_{\mathfrak{H}'} - (P'f,h')\_{\mathfrak{H}'} = (f',h)\_{\mathfrak{H}} - (f,h')\_{\mathfrak{H}} = 0.$$

The assumption that S′ is self-adjoint in H′ implies {P′f, P′f′} ∈ S′. Therefore, P̂′S ⊂ S′. This implies S′ = P̂′S, and (i) yields S″ = P̂″S.

(iii) According to (i), either of the conditions S′ = P̂′S or S″ = P̂″S implies that S = S′ ⊕ S″. Since S is self-adjoint, this shows that S′ is self-adjoint in H′ and that S″ is self-adjoint in H″. □

Before introducing the notion of simplicity in Definition 3.4.3 below, the following lemma on symmetric and self-adjoint extensions of symmetric relations that contain a self-adjoint part is discussed.

**Lemma 3.4.2.** Let S be a closed symmetric relation in H whose defect numbers are not necessarily equal and assume that there are orthogonal decompositions

$$\mathfrak{H} = \mathfrak{H}' \oplus \mathfrak{H}'', \quad S = S' \oplus S'',\tag{3.4.3}$$

such that S′ is closed and symmetric in H′ and S″ is self-adjoint in H″. Then every closed symmetric (self-adjoint) extension A of S in H admits the decomposition

$$A = A' \oplus S'',$$

where A′ is a closed symmetric (self-adjoint) extension of S′ in H′.

Proof. Observe that the inclusion S ⊂ A and the decomposition (3.4.3) imply that

$$S'' = S \cap (\mathfrak{H}'')^2 \subset A \cap (\mathfrak{H}'')^2.$$

Therefore, the assumption that S″ is self-adjoint in H″ shows that the closed symmetric relation A ∩ (H″)² is actually self-adjoint in H″ and that S″ = A ∩ (H″)². Hence, by Lemma 3.4.1 (i)–(ii) the relation A decomposes as A = A′ ⊕ S″, where A′ = A ∩ (H′)² is a symmetric extension of S′ in H′. Therefore,

$$S' \oplus S'' \subset A' \oplus S''.$$

If A is self-adjoint in H, then Lemma 3.4.1 (iii) implies that A′ = A ∩ (H′)² is a self-adjoint extension of S′ in H′. This completes the proof. □

The notion of simplicity, or complete non-self-adjointness, is defined next.

**Definition 3.4.3.** Let S be a closed symmetric relation in H whose defect numbers are not necessarily equal. Then S is simple if there is no orthogonal decomposition

$$S = S' \oplus S'', \quad \text{where} \quad \mathfrak{H} = \mathfrak{H}' \oplus \mathfrak{H}'',\tag{3.4.4}$$

such that H″ ≠ {0} and S″ is self-adjoint in H″.

Every closed symmetric relation S in H has the orthogonal componentwise decomposition S = Sop ⊕ Smul, where Smul is a purely multivalued self-adjoint relation in the closed subspace Hmul = mul S; cf. Theorem 1.4.11. Hence, a closed simple symmetric relation is necessarily an operator. A similar argument shows that a closed simple symmetric relation does not have any eigenvalues; cf. Lemma 3.4.7.

Any closed symmetric relation S in H has a decomposition as in (3.4.4), where S′ is simple in H′ and S″ is self-adjoint in H″. To see this, define the closed subspace R ⊂ H by

$$\mathfrak{R} := \bigcap\_{\lambda \in \mathbb{C} \backslash \mathbb{R}} \text{ran}\left( S - \lambda \right), \tag{3.4.5}$$

and the closed subspace K = R⊥, so that

$$\mathfrak{K} = \overline{\text{span}}\left\{ \mathfrak{N}\_{\lambda}(S^\*) : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\}, \quad \mathfrak{N}\_{\lambda}(S^\*) = \ker(S^\* - \lambda). \tag{3.4.6}$$

It follows from Lemma 1.6.11 that the set C \ R in (3.4.6), and hence in the intersection in (3.4.5), can be replaced by any subset of C \ R which has an accumulation point in C+ and an accumulation point in C−.

**Theorem 3.4.4.** Let S be a closed symmetric relation in H whose defect numbers are not necessarily equal. Let H be decomposed as H = K ⊕ R, where the closed subspaces K and R are defined as in (3.4.5) and (3.4.6), and denote

$$S'=S\cap \mathfrak{K}^2 \quad \text{and} \quad S''=S\cap \mathfrak{R}^2. \tag{3.4.7}$$

Then the relation S admits the orthogonal decomposition

$$S = S' \oplus S'',\tag{3.4.8}$$

where S′ is a closed simple symmetric operator in K and S″ is a self-adjoint relation in R.

Proof. Step 1. First it will be shown that R satisfies the following invariance property: for any λ0 ∈ C \ R,

$$(S - \lambda\_0)^{-1} \Re \subset \Re. \tag{3.4.9}$$

To see this, let h ∈ R and h′ = (S − λ0)⁻¹h. Hence, {h′, h + λ0h′} ∈ S and thus

$$(h + \lambda\_0 h', f\_{\overline{\lambda}}) = (h', \overline{\lambda} f\_{\overline{\lambda}}), \quad \{f\_{\overline{\lambda}}, \overline{\lambda} f\_{\overline{\lambda}}\} \in S^\*, \ \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Since h ∈ ran (S − λ) = (ker (S∗ − λ̄))⊥ for λ ∈ C \ R, and fλ̄ ∈ ker (S∗ − λ̄), this implies

$$0 = (h, f\_{\overline{\lambda}}) = (\lambda - \lambda\_0)(h', f\_{\overline{\lambda}}),$$

that is, h′ ⊥ fλ̄ for all λ ∈ C \ R, λ ≠ λ0. Hence, h′ ∈ ran (S − λ) for all λ ∈ C \ R, λ ≠ λ0, and it follows from Lemma 1.6.11 that

$$h' \in \bigcap\_{\lambda \in \mathbb{C} \backslash \mathbb{R}, \, \lambda \neq \lambda\_0} \text{ran}\left(S - \lambda\right) = \bigcap\_{\lambda \in \mathbb{C} \backslash \mathbb{R}} \text{ran}\left(S - \lambda\right) \, = \Re,$$

which proves the inclusion in (3.4.9).

Step 2. Next it will be shown that the relation S ∩ R² is self-adjoint. Fix some λ0 ∈ C \ R and define the relation S″ first by

$$S^{\prime\prime} = \left\{ \left\{ (S - \lambda\_0)^{-1} h, (I + \lambda\_0 (S - \lambda\_0)^{-1}) h \right\} : h \in \mathfrak{R} \right\}.\tag{3.4.10}$$

It follows from (3.4.5) that R ⊂ ran (S − λ0), and hence Lemma 1.1.8 implies S″ ⊂ S, so that, in particular, S″ is symmetric in H. It follows from (3.4.9) that S″ ⊂ R². Therefore, S″ ⊂ S ∩ R². Next the inclusion S ∩ R² ⊂ S″ will be verified. Let {f, f′} ∈ S ∩ R², so that by Lemma 1.1.8

$$\{f, f'\} = \{ (S - \lambda\_0)^{-1} h, (I + \lambda\_0 (S - \lambda\_0)^{-1}) h\}$$

for some h ∈ ran (S − λ0). Since {f, f′} ∈ R², it follows that

$$(S - \lambda\_0)^{-1}h \in \mathfrak{R} \quad \text{and} \quad (I + \lambda\_0(S - \lambda\_0)^{-1})h \in \mathfrak{R}.$$

Therefore, h ∈ R and hence {f, f′} ∈ S″, so that S ∩ R² ⊂ S″. This leads to the equality S″ = S ∩ R² in (3.4.7); in particular, S″ in (3.4.10) does not depend on the choice of λ0 ∈ C \ R.

From S″ ⊂ S it follows that S″ is symmetric, and from (3.4.10) one obtains that ran (S″ − λ0) = R. Since S″ is independent of the choice of λ0, it follows that ran (S″ − λ) = R holds for every λ ∈ C \ R. Hence, S″ = S ∩ R² is a self-adjoint relation in R by Theorem 1.5.5. Now Lemma 3.4.1 (i)–(ii) imply (3.4.8).

Step 3. In order to show that S′ = S ∩ K² is simple in the Hilbert space K, assume that there is an orthogonal decomposition K = K1 ⊕ K2 and a corresponding orthogonal decomposition S′ = S1 ⊕ S2 such that S2 is self-adjoint in K2. Then ran (S2 − λ) = K2 for all λ ∈ C \ R and thus

$$\mathfrak{K}\_2 = \text{ran}\left(S\_2 - \lambda\right) \subset \text{ran}\left(S' - \lambda\right) \subset \text{ran}\left(S - \lambda\right), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

According to (3.4.5), this implies K2 ⊂ R, while K2 ⊂ K = R⊥. Thus, K2 = {0}, so that S′ is simple. □

**Corollary 3.4.5.** Let S be a closed symmetric relation in H. Then S is simple if and only if

$$\mathfrak{H} = \overline{\text{span}}\left\{ \mathfrak{N}\_{\lambda}(S^\*) \, : \, \lambda \in \mathbb{C} \, \backslash \, \mathbb{R} \right\}. \tag{3.4.11}$$

The set C \ R on the right-hand side can be replaced by any set U which has an accumulation point in C+ and in C−.

Proof. It follows from Theorem 3.4.4 and the definition of K in (3.4.6) that the equality (3.4.11) holds if and only if S is simple. The last assertion in the corollary follows from Lemma 1.6.11. □
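A standard illustration of Corollary 3.4.5 (recalled here for orientation; the computation is not carried out in this section) is the minimal momentum operator S = −i d/dx in H = L²(0, 1) with dom S = {f ∈ H¹(0, 1) : f(0) = f(1) = 0}. Its adjoint is S∗ = −i d/dx on H¹(0, 1), and solving −if′ = λf gives the one-dimensional defect spaces

```latex
% Defect spaces of the minimal momentum operator on (0,1):
\mathfrak{N}_{\lambda}(S^*) = \ker(S^* - \lambda)
   = \operatorname{span}\{\, x \mapsto e^{i\lambda x} \,\},
   \qquad \lambda \in \mathbb{C} \setminus \mathbb{R}.
```

Since the closed linear span of the exponentials e^{iλx}, λ ∈ C \ R, is all of L²(0, 1), condition (3.4.11) holds and S is simple; in particular, by Lemma 3.4.7 it is an operator without eigenvalues.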

**Corollary 3.4.6.** Let S be a closed symmetric relation in H. Then the following statements are equivalent:

(i) the Hilbert space Hop admits the representation

$$\mathfrak{H}\_{\mathrm{op}} = \overline{\mathrm{span}}\left\{ \mathfrak{N}\_{\lambda}(S^\*) : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\};$$

(ii) the operator part Sop of S is a closed simple symmetric operator in Hop = H ⊖ mul S.

The set C \ R on the right-hand side in (i) can be replaced by any set U which has an accumulation point in C+ and in C−.

Proof. (i) ⇒ (ii) The assumption implies in the context of Theorem 3.4.4 that R = mul S, so that

$$S'' = S \cap \mathfrak{R}^2 = \{0\} \times \operatorname{mul} S,$$

which is a self-adjoint relation in mul S. Hence, by the decomposition S = S′ ⊕ S″ in Theorem 3.4.4 it follows that S′ = Sop in Hop = H ⊖ mul S.

(ii) ⇒ (i) Recall that H = Hop ⊕ Hmul . By Corollary 3.4.5 one has

$$\mathfrak{H}\_{\mathrm{op}} = \overline{\mathrm{span}}\left\{ \mathfrak{N}\_{\lambda}(S\_{\mathrm{op}}^{\*}) : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\}.$$

From the decomposition S = Sop ⊕ ({0} × mul S) and Proposition 1.3.13 one concludes that S∗ = S∗op ⊕ ({0} × mul S). Hence, Nλ(S∗) = Nλ(S∗op), which yields (i). □

**Lemma 3.4.7.** Let S be a closed simple symmetric relation in H. Then S is an operator and it has no eigenvalues.

Proof. Indeed, it follows from Definition 3.4.3 that also S − x and (S − x)⁻¹, x ∈ R, are closed simple symmetric relations in H. In particular, (S − x)⁻¹ is an operator; cf. the discussion following Definition 3.4.3. This implies ker (S − x) = {0} for all x ∈ R, and hence S has no eigenvalues. □

In certain situations the assertion in Lemma 3.4.7 has a converse.

**Proposition 3.4.8.** Let S be a closed symmetric relation in H and assume that there exists a self-adjoint extension A of S in H such that σ(A) = σp(A). If σp(S) = ∅, then the operator part Sop of S is a closed simple symmetric operator in the Hilbert space Hop = (mul S)⊥.

Proof. By Lemma 3.4.2 and Theorem 1.4.11, it suffices to consider the case where S is a closed symmetric operator and A is a self-adjoint extension of S. Now assume that S is not simple, so that by Theorem 3.4.4 there are nontrivial decompositions H = H′ ⊕ H″ and S = S′ ⊕ S″ with S′ closed, simple, and symmetric in H′, and S″ self-adjoint in H″. Then A decomposes accordingly as A = A′ ⊕ S″ with A′ self-adjoint in H′ by Lemma 3.4.2. Now σ(A) = σp(A) implies that S″, and thus S, has a nontrivial point spectrum, which gives a contradiction. □

The notion of simplicity of a closed symmetric relation S in H will now be specified in a local sense. This will be done relative to a Borel set Δ ⊂ R and by means of a self-adjoint extension A of S and its spectral measure E(·). Then H admits the orthogonal decomposition

$$
\mathfrak{H} = E(\Delta)\mathfrak{H} \oplus (I - E(\Delta))\mathfrak{H},
$$

which leads to the orthogonal componentwise decomposition of A into self-adjoint components:

$$A = \left[A \cap \left(E(\Delta)\mathfrak{H}\right)^2\right] \widehat{\oplus} \left[A \cap \left((I - E(\Delta))\mathfrak{H}\right)^2\right].$$

Note that A ∩ (E(Δ)H)² is a self-adjoint operator in E(Δ)H which coincides with Aop ↾ Eop(Δ)Hop; cf. Section 1.5.

**Definition 3.4.9.** Let S be a closed symmetric relation in H and let A be a self-adjoint extension of S with spectral measure E(·). Let Δ ⊂ R be a Borel set. Then S is said to be simple with respect to Δ ⊂ R and the self-adjoint extension A if

$$E(\Delta)\mathfrak{H} = \overline{\text{span}}\left\{ E(\Delta)k : k \in \mathfrak{N}\_{\lambda}(S^\*), \ \lambda \in \mathbb{C} \setminus \mathbb{R} \right\}.\tag{3.4.12}$$

In the next proposition this local notion and some of its consequences are discussed.

**Proposition 3.4.10.** Let S be a closed symmetric relation in H and let A be a self-adjoint extension of S with spectral measure E(·). Assume that S is simple with respect to the Borel set Δ ⊂ R and the self-adjoint extension A. Then the following statements hold:

(i) For every Borel set Δ′ ⊂ Δ one has

$$E(\Delta')\mathfrak{H} = \overline{\text{span}}\left\{ E(\Delta')k : \, k \in \mathfrak{N}\_{\lambda}(S^\*), \,\lambda \in \mathbb{C} \,\backslash \,\mathbb{R} \right\}.\tag{3.4.13}$$

(ii) There is no point spectrum of S in Δ:

$$
\Delta \cap \sigma\_{p}(S) = \emptyset.
$$

(iii) If U is a subset of ρ(A) with an accumulation point in each connected component of ρ(A), then

$$E(\Delta)\mathfrak{H} = \overline{\text{span}}\left\{ E(\Delta)k : \ k \in \mathfrak{N}\_{\lambda}(S^\*), \ \lambda \in U \right\}.\tag{3.4.14}$$

Proof. (i) First note that the inclusion (⊃) in (3.4.13) holds. To see the converse inclusion, let f ∈ E(Δ′)H. As Δ′ ⊂ Δ, one has

$$E(\Delta')\mathfrak{H} \subset E(\Delta)\mathfrak{H},$$

and hence f ∈ E(Δ)H. By assumption, the identity (3.4.12) holds and so, in the linear span of

$$\left\{ E(\Delta)k : \, k \in \mathfrak{N}\_{\lambda}(S^\*), \,\lambda \in \mathbb{C} \,\backslash \,\mathbb{R} \right\}$$

there exists a sequence (fn) that converges to f. Then (E(Δ′)fn) is a sequence in the linear span of

$$\left\{ E(\Delta')k : \, k \in \mathfrak{N}\_{\lambda}(S^\*), \,\lambda \in \mathbb{C} \,\backslash \,\mathbb{R} \right\}$$

which converges to E(Δ′)f = f. This shows the inclusion (⊂) in (3.4.13).

(ii) Assume that {f, xf} ∈ S for some x ∈ Δ. Since S ⊂ A, it follows that f ∈ E(Δ)H. Observe that for k ∈ Nλ(S∗) with λ ∈ C \ R one has {k, λk} ∈ S∗ and hence (λk, f) = (k, xf). As x ∈ R and λ ∈ C \ R, it follows that (k, f) = 0. Further, since f ∈ E(Δ)H, one concludes that

$$0 = (k, f) = (k, E(\Delta)f) = (E(\Delta)k, f)$$

for all k ∈ Nλ(S∗) and λ ∈ C \ R. Hence, (3.4.12) implies that f ∈ E(Δ)H is orthogonal to E(Δ)H, which shows that f = 0. Thus, S does not possess any eigenvalues in Δ.

(iii) The inclusion (⊃) in (3.4.14) is clear. In order to prove the identity, fix μ ∈ U and recall from Lemma 1.4.10 that the operator I + (λ − μ)(A − λ)⁻¹ maps Nμ(S∗) bijectively onto Nλ(S∗) for all λ ∈ C \ R. It suffices to verify that the vectors E(Δ)k, k ∈ Nλ(S∗), λ ∈ U, span a dense set in E(Δ)H. Suppose that E(Δ)f is orthogonal to this set, that is,

$$0 = \left( E(\Delta)(I + (\lambda - \mu)(A - \lambda)^{-1})g\_{\mu}, E(\Delta)f \right) \tag{3.4.15}$$

for all gμ ∈ Nμ(S∗) and all λ ∈ U. Since for each gμ ∈ Nμ(S∗) the function

$$\lambda \mapsto \left( E(\Delta)(I + (\lambda - \mu)(A - \lambda)^{-1}) g\_{\mu}, E(\Delta)f \right)$$

is analytic on ρ(A), it follows from (3.4.15) and the assumption that U has an accumulation point in each connected component of ρ(A) that this function is identically equal to zero. Hence, (E(Δ)k, E(Δ)f) = 0 for all k ∈ Nλ(S∗) and λ ∈ C \ R. Now (3.4.12) yields E(Δ)f = 0 and (iii) follows. □

The connection with the global notion of simplicity is given in the following corollary.

**Corollary 3.4.11.** Let S be a closed symmetric relation in H and let A be a self-adjoint extension of S with spectral measure E(·). Then S is simple if and only if S is simple with respect to every Borel set Δ ⊂ R and the self-adjoint extension A.

Proof. Assume that S is simple. Then (3.4.11) holds, and hence (3.4.12) holds with Δ = R. Then Proposition 3.4.10 (i) implies that S is simple with respect to every Borel set Δ ⊂ R and the self-adjoint extension A. Conversely, if (3.4.12) holds for every Borel set Δ ⊂ R, then (3.4.12) also holds for Δ = R, where it reduces to (3.4.11), that is, S is simple. □

In the following lemma the eigenspace of A corresponding to an eigenvalue x is described in the case where x is not an eigenvalue of S. In particular, this observation leads to a characterization of local simplicity when the Borel set Δ ⊂ R in Definition 3.4.9 is a singleton; cf. Corollary 3.4.13.

**Lemma 3.4.12.** Let S be a closed symmetric relation in H, let A be a self-adjoint extension of S with spectral measure E(·), and let x ∈ R. Then

$$E(\{x\})\mathfrak{H} = E(\{x\})\mathfrak{N}\_{\lambda}(S^\*)\tag{3.4.16}$$

for some, and hence for all, λ ∈ C \ R, if and only if x ∉ σp(S).

Proof. Assume first that (3.4.16) holds for some fixed λ ∈ C \ R, and suppose that {f, xf} ∈ S. This implies {f, xf} ∈ A and hence f ∈ E({x})H. Moreover, one has (xf, kλ) = (f, λkλ) for all kλ ∈ Nλ(S∗), as {kλ, λkλ} ∈ S∗. It follows that

$$\left( xf, E(\{x\})k\_{\lambda} \right) = (xf, k\_{\lambda}) = (f, \lambda k\_{\lambda}) = \left( f, \lambda E(\{x\})k\_{\lambda} \right)$$

and hence (f, E({x})kλ) = 0 for all kλ ∈ Nλ(S∗). Now (3.4.16) and f ∈ E({x})H yield f = 0, which implies x ∉ σp(S).

Conversely, assume that x ∉ σp(S) and let λ ∈ C \ R. The inclusion (⊃) in (3.4.16) is clear, and since both subspaces in (3.4.16) are closed, it suffices to verify that E({x})Nλ(S∗) is dense in E({x})H. Suppose that there exists f ∈ E({x})H such that

$$\left(f, E(\{x\})k\_{\lambda}\right) = 0, \qquad k\_{\lambda} \in \mathfrak{N}\_{\lambda}(S^\*).$$

As f ∈ E({x})H, this implies (f, kλ) = 0 and hence f ∈ ran (S − λ̄). Choose {g, g′} ∈ S such that g′ − λ̄g = f. Then

$$g = (S - \overline{\lambda})^{-1} f = (A - \overline{\lambda})^{-1} f = \frac{1}{x - \overline{\lambda}} f$$

and

$$g' = f + \overline{\lambda}g = f + \frac{\overline{\lambda}}{x - \overline{\lambda}}f = \frac{x}{x - \overline{\lambda}}f,$$

and it follows that {f, xf} ∈ S. Since x ∉ σp(S) by assumption, this yields f = 0. Hence, E({x})Nλ(S∗) is dense in E({x})H and therefore (3.4.16) holds. □

The above lemma together with Proposition 3.4.10 (ii) implies that S is simple with respect to a point x ∈ R if and only if x is not an eigenvalue of S.

**Corollary 3.4.13.** Let S be a closed symmetric relation in H, let A be a self-adjoint extension of S with spectral measure E(·), and let x ∈ R. Then

$$E(\{x\})\mathfrak{H} = \overline{\operatorname{span}}\left\{ E(\{x\})k \, :\, k \in \mathfrak{N}\_{\lambda}(S^\*),\,\lambda \in \mathbb{C}\,\backslash\,\mathbb{R} \right\}$$

holds if and only if x ∉ σp(S).

## **3.5 Eigenvalues and eigenspaces**

Let S be a closed symmetric relation in a Hilbert space H and let {G, Γ0, Γ1} be a boundary triplet for S<sup>∗</sup> with A<sup>0</sup> = ker Γ0, and corresponding γ-field γ and Weyl function M. The purpose of the present section is to characterize eigenvalues and the associated eigenspaces of the self-adjoint relation A<sup>0</sup> by means of the corresponding Weyl function M.

Recall that the Weyl function M can be expressed in terms of the γ-field and the resolvent of the self-adjoint relation A0; cf. Proposition 2.3.6 (v). In particular, for λ = x + iy, y > 0, and λ0 ∈ ρ(A0) one has

$$\begin{split} M(x+iy) &= \operatorname{Re} M(\lambda\_0) + \gamma(\lambda\_0)^\* \left[ (x+iy-\operatorname{Re}\lambda\_0) \right. \\ &\left. + (x+iy-\lambda\_0)(x+iy-\overline{\lambda}\_0) \left( A\_0 - (x+iy) \right)^{-1} \right] \gamma(\lambda\_0). \end{split} \tag{3.5.1}$$

This formula will be used to study the behavior of the Weyl function M at a point x ∈ R. In the next proposition it turns out that the strong limit of iyM(x + iy), y ↓ 0, is closely connected with the eigenspace of A0 at x. Here the spectral measure E of A0 is not used explicitly in the assertion; the orthogonal projection onto the eigenspace Nx(A0) = ker (A0 − x) is denoted by PNx(A0) instead of E({x}).

**Proposition 3.5.1.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, let M and γ be the corresponding Weyl function and γ-field, and let x ∈ R. Then for each λ0 ∈ ρ(A0) and all ϕ ∈ G one has

$$\lim\_{y \downarrow 0} iyM(x+iy)\varphi = -|x-\lambda\_0|^2 \gamma(\lambda\_0)^\* P\_{\mathfrak{N}\_x(A\_0)} \gamma(\lambda\_0)\varphi. \tag{3.5.2}$$

Proof. For x ∈ R and λ0 ∈ ρ(A0), it follows from (3.5.1) that

$$\begin{aligned} i y M(x + iy) &= i y \operatorname{Re} M(\lambda\_0) + i y \gamma(\lambda\_0)^\* (x + iy - \operatorname{Re} \lambda\_0) \gamma(\lambda\_0) \\ &+ i y \gamma(\lambda\_0)^\* (x + iy - \lambda\_0) (x + iy - \overline{\lambda}\_0) \left(A\_0 - (x + iy)\right)^{-1} \gamma(\lambda\_0) .\end{aligned}$$

As the first and second terms on the right-hand side tend to 0 as y ↓ 0, one obtains

$$\lim\_{y \downarrow 0} iyM(x+iy)\varphi = |x - \lambda\_0|^2 \gamma(\lambda\_0)^\* \left[ \lim\_{y \downarrow 0} iy \left( A\_0 - (x+iy) \right)^{-1} \right] \gamma(\lambda\_0)\varphi \tag{3.5.3}$$

for all ϕ ∈ G. Since x ∈ R is fixed and y ↓ 0, one has that

$$\frac{iy}{t - (x + iy)} \to -\mathbf{1}\_x(t), \quad t \in \mathbb{R},$$

where the approximating functions are uniformly bounded by 1. The spectral calculus for the self-adjoint relation A0 in Lemma 1.5.3 yields

$$\lim\_{y \downarrow 0} iy \left( A\_0 - (x + iy) \right)^{-1} \gamma(\lambda\_0) \varphi = -P\_{\mathfrak{N}\_x(A\_0)} \gamma(\lambda\_0) \varphi, \quad \varphi \in \mathcal{G}.$$

Now the assertion follows from (3.5.3). □

**Definition 3.5.2.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, and let M be the corresponding Weyl function. For x ∈ R the operator Rx : G → G is defined as the strong limit

$$\mathcal{R}\_x \varphi = \lim\_{y \downarrow 0} iyM(x+iy)\varphi, \qquad \varphi \in \mathcal{G}.$$

Observe that Rx in Definition 3.5.2 is a well-defined operator in **B**(G); indeed, this is clear from the identity (3.5.2). It also follows from (3.5.2) that Rx = 0 when x ∈ ρ(A0) ∩ R.

**Remark 3.5.3.** If x ∈ R is an isolated singularity of the function M, then x is a pole of first order of M; cf. Corollary 2.3.9. Moreover, in a sufficiently small punctured disc Bx \ {x} centered at x such that M is holomorphic in Bx \ {x}, one has a norm convergent Laurent series expansion of the form

$$M(\lambda) = \frac{M\_{-1}}{\lambda - x} + \sum\_{k=0}^{\infty} M\_k (\lambda - x)^k, \quad M\_{-1}, M\_0, M\_1, \dots \in \mathbf{B}(\mathcal{G}).$$

It follows that Rx coincides with the residue of M at x, i.e.,

$$\mathcal{R}\_x = \frac{1}{2\pi i} \int\_{\mathcal{C}} M(\lambda) \, d\lambda = M\_{-1},$$

where C denotes the boundary of Bx.
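
To make Definition 3.5.2 and the residue identity above concrete, here is a small numerical sketch for a hypothetical scalar Weyl function M(λ) = 1/(x₀ − λ), an illustrative assumption rather than an example from the text; it is a Nevanlinna function with a first-order pole at x₀ and residue M₋₁ = −1, and the strong limit iyM(x + iy) recovers exactly this residue:

```python
# Hedged numerical sketch (illustrative scalar example, not from the text):
# M(lam) = 1/(x0 - lam) has a first-order pole at x0 with residue M_{-1} = -1.
x0 = 2.0
M = lambda lam: 1.0 / (x0 - lam)

def R(x, y=1e-8):
    """Approximate R_x = lim_{y -> 0} i*y*M(x + i*y) from Definition 3.5.2."""
    return 1j * y * M(x + 1j * y)

print(round(R(x0).real, 6))    # -1.0: R_{x0} equals the residue M_{-1}
print(round(abs(R(1.0)), 6))   # 0.0: R_x vanishes away from the pole
```

In the scalar picture this mirrors Remark 3.5.3: Rx is nonzero exactly at the pole of M, where it reproduces the residue.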

In the following let x ∈ R and recall that the corresponding eigenspaces of S and A0 are given by

$$\widehat{\mathfrak{N}}\_x(S) = \left\{ \{f, xf\} : f \in \mathfrak{N}\_x(S) \right\}, \quad \mathfrak{N}\_x(S) = \ker(S - x),$$

and

$$\widehat{\mathfrak{N}}\_x(A\_0) = \left\{ \{f, xf\} : f \in \mathfrak{N}\_x(A\_0) \right\}, \quad \mathfrak{N}\_x(A\_0) = \ker\left(A\_0 - x\right).$$

The main interest will be in the closed linear subspace N̂x(A0) ⊖ N̂x(S), which is the orthogonal complement of N̂x(S) in N̂x(A0). Similarly, the orthogonal complement of Nx(S) in Nx(A0) is denoted by Nx(A0) ⊖ Nx(S).

**Lemma 3.5.4.** Let λ0 ∈ ρ(A0), let x ∈ R, and let Px be the orthogonal projection from H onto Nx(A0) ⊖ Nx(S). Then the operator Rx has the representation

$$\mathcal{R}\_x \varphi = (\lambda\_0 - x) \Gamma\_1 \left\{ P\_x \gamma(\lambda\_0) \varphi, x P\_x \gamma(\lambda\_0) \varphi \right\}, \quad \varphi \in \mathcal{G}. \tag{3.5.4}$$

Proof. First, recall from Corollary 2.3.3 that for x ∈ R and {h, xh} ∈ A0 one has

$$
\Gamma\_1 \{ h, xh \} = (x - \overline{\lambda}\_0) \gamma(\lambda\_0)^\* h, \qquad \lambda\_0 \in \rho(A\_0).
$$

Now let ϕ ∈ G and consider

$$h = (\lambda\_0 - x)P\_{\mathfrak{N}\_x(A\_0)}\gamma(\lambda\_0)\varphi \in \ker\left(A\_0 - x\right).$$

According to Proposition 3.5.1 and Definition 3.5.2,

$$\begin{split} \mathcal{R}\_x \varphi &= -|x - \lambda\_0|^2 \gamma(\lambda\_0)^\* P\_{\mathfrak{N}\_x(A\_0)} \gamma(\lambda\_0) \varphi \\ &= (x - \overline{\lambda}\_0) \gamma(\lambda\_0)^\* h \\ &= \Gamma\_1 \{ h, xh \} \\ &= (\lambda\_0 - x) \Gamma\_1 \{ P\_{\mathfrak{N}\_x(A\_0)} \gamma(\lambda\_0) \varphi, xP\_{\mathfrak{N}\_x(A\_0)} \gamma(\lambda\_0) \varphi \}. \end{split} \tag{3.5.5}$$

Now observe that for ϕ ∈ G

$$P\_{\mathfrak{N}\_x(A\_0)}\gamma(\lambda\_0)\varphi = P\_x\gamma(\lambda\_0)\varphi + P\_{\mathfrak{N}\_x(S)}\gamma(\lambda\_0)\varphi.$$

Since {PNx(S)γ(λ0)ϕ, xPNx(S)γ(λ0)ϕ} ∈ S and S = ker Γ<sup>0</sup> ∩ ker Γ<sup>1</sup> by Proposition 2.1.2 (ii), it follows that

$$\Gamma\_1 \left\{ P\_{\mathfrak{N}\_x(S)} \gamma(\lambda\_0) \varphi, x P\_{\mathfrak{N}\_x(S)} \gamma(\lambda\_0) \varphi \right\} = 0$$

and hence (3.5.5) leads to (3.5.4). □

In the following theorem the eigenvalue x ∈ R and the corresponding eigenspace of A0 are characterized by means of the Weyl function M and the operator Rx. Later it will be shown how to distinguish between isolated and embedded eigenvalues of A0; cf. Theorem 3.6.1.

**Theorem 3.5.5.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, let M and γ be the corresponding Weyl function and γ-field, and let x ∈ R. Then the mapping

$$\tau: \widehat{\mathfrak{N}}\_x(A\_0) \ominus \widehat{\mathfrak{N}}\_x(S) \to \overline{\text{ran}}\,\mathcal{R}\_x, \quad \widehat{f} \mapsto \Gamma\_1 \widehat{f},\tag{3.5.6}$$

is an isomorphism. In particular,

$$x \in \sigma\_{p}(A\_0) \text{ and } \widehat{\mathfrak{N}}\_x(A\_0) \ominus \widehat{\mathfrak{N}}\_x(S) \neq \{0\} \quad \Leftrightarrow \quad \mathcal{R}\_x \neq 0.$$

Proof. Let x ∈ R and define Kx = N̂x(A0) ⊖ N̂x(S). The mapping Γ1 : S∗ → G is continuous and, in particular, its restriction to Kx ⊂ S∗ is continuous. In Step 1 it will be shown that the restriction of Γ1 to Kx is injective, and in Step 2 it will be shown that it has closed range. Together with the density statement in Step 3, it then follows in Step 4 that τ in (3.5.6) is an isomorphism.

Step 1. The restriction of the mapping Γ1 to Kx is injective. Indeed, let f̂ ∈ Kx with Γ1f̂ = 0. The assumption f̂ ∈ Kx implies that f̂ ∈ A0 and hence Γ0f̂ = 0. Therefore, f̂ ∈ ker Γ0 ∩ ker Γ1 = S. Since f̂ = {f, xf} ∈ N̂x(A0) ⊖ N̂x(S), this implies f̂ = 0.

Step 2. The range of the restriction of Γ1 to Kx is closed. In fact, let (ϕn) be a sequence in ran (Γ1 ↾ Kx) such that ϕn → ϕ ∈ G. Then there exists a sequence (f̂n) in Kx such that Γ1f̂n = ϕn, and as f̂n ∈ A0 one has Γ0f̂n = 0. Therefore, Γf̂n = {0, ϕn} → {0, ϕ}. Recall from Proposition 2.1.2 that the restriction of Γ to S∗ ⊖ S is an isomorphism onto G × G. It follows that the f̂n converge to some element f̂, which belongs to the closed subspace Kx. This yields Γ1f̂ = ϕ and hence ran (Γ1 ↾ Kx) is closed.

Step 3. The linear space

$$\left\{ \left\{ P\_x \gamma(\lambda\_0) \varphi, x P\_x \gamma(\lambda\_0) \varphi \right\} : \varphi \in \mathcal{G} \right\}$$

is dense in the Hilbert space Kx = N̂x(A0) ⊖ N̂x(S). To see this, let f̂ ∈ Kx be orthogonal to all {Pxγ(λ0)ϕ, xPxγ(λ0)ϕ}, ϕ ∈ G. Then, since f̂ = {f, xf}, Corollary 2.3.3 shows that for all ϕ ∈ G one has

$$\begin{aligned} 0 &= \left( f, \{ P\_x \gamma(\lambda\_0) \varphi, x P\_x \gamma(\lambda\_0) \varphi \} \right) \\ &= (f, P\_x \gamma(\lambda\_0) \varphi) + (x f, x P\_x \gamma(\lambda\_0) \varphi) \\ &= (1 + x^2) (f, \gamma(\lambda\_0) \varphi) \\ &= (1 + x^2) (\gamma(\lambda\_0)^\* f, \varphi) \\ &= (1 + x^2) (x - \overline{\lambda}\_0)^{-1} (\Gamma\_1 \widehat{f}, \varphi), \end{aligned}$$

so that Γ1f̂ = 0, and hence f̂ = 0 by Step 1.

Step 4. The mapping in (3.5.6) is an isomorphism. To see this, observe that

$$
\operatorname{ran} \mathcal{R}\_x \subset \operatorname{ran} \left( \Gamma\_1 \restriction \mathfrak{K}\_x \right) \subset \overline{\operatorname{ran}} \mathcal{R}\_x. \tag{3.5.7}
$$

The first inclusion in (3.5.7) follows directly from (3.5.4). From the same identity one also sees that

$$\Gamma\_1\left\{P\_x\gamma(\lambda\_0)\varphi, xP\_x\gamma(\lambda\_0)\varphi\right\} = \frac{1}{\lambda\_0 - x}\,\mathcal{R}\_x\varphi \in \text{ran}\,\mathcal{R}\_x \subset \overline{\text{ran}}\,\mathcal{R}\_x.$$

Hence, the second inclusion in (3.5.7) follows from Step 3 and the boundedness of Γ1. It is clear from (3.5.7) and Step 2 that

$$\text{ran}\left(\Gamma\_1 \restriction \mathfrak{K}\_x\right) = \overline{\text{ran}}\,\mathcal{R}\_x,$$

and hence, due to Step 1, the mapping in (3.5.6) is an isomorphism. □

The statement of Theorem 3.5.5 can be simplified if x is not an eigenvalue of the symmetric relation S, that is, if S satisfies a local simplicity condition at x ∈ R; cf. Corollary 3.4.13.

**Corollary 3.5.6.** Assume that x is not an eigenvalue of the closed symmetric relation S in Theorem 3.5.5. Then

$$x \in \sigma\_{\mathbf{p}}(A\_0) \quad \Leftrightarrow \quad \mathcal{R}\_x \neq 0.$$

Now the behavior of M at ∞ will be considered and the multivalued part of A<sup>0</sup> will be described. First recall that the self-adjoint relation A<sup>0</sup> is decomposed into the orthogonal sum

$$A\_0 = A\_{0,\text{op}} \,\widehat{\oplus}\, A\_{0,\text{mul}}, \tag{3.5.8}$$

where A0,op is a self-adjoint operator in the Hilbert space

$$\mathfrak{H}\_{\rm op} = (\operatorname{mul} A\_0)^\perp = \overline{\operatorname{dom}} A\_0 \tag{3.5.9}$$

and A0,mul is the purely multivalued self-adjoint relation in Hmul = mul A0. Then the resolvent of A<sup>0</sup> has the form

$$(A\_0 - \lambda)^{-1} = \begin{pmatrix} (A\_{0, \text{op}} - \lambda)^{-1} & 0\\ 0 & 0 \end{pmatrix}, \qquad \lambda \in \rho(A\_0), \tag{3.5.10}$$

with respect to the decomposition H = Hop ⊕ Hmul ; cf. (1.5.1).

The representation (3.5.1) of M in terms of A<sup>0</sup> gives for λ<sup>0</sup> ∈ ρ(A0) and x = 0 that

$$\begin{split} M(iy) &= \text{Re}\, M(\lambda\_0) \\ &+ \gamma(\lambda\_0)^\* \left[ iy - \text{Re}\,\lambda\_0 + (iy - \lambda\_0)(iy - \overline{\lambda}\_0)(A\_0 - iy)^{-1} \right] \gamma(\lambda\_0). \end{split} \tag{3.5.11}$$

In order to use this formula for large y, decompose the term γ(λ0)∗γ(λ0) as

$$
\gamma(\lambda\_0)^\* \gamma(\lambda\_0) = \gamma(\lambda\_0)^\* (I - P\_{\rm op}) \gamma(\lambda\_0) + \gamma(\lambda\_0)^\* \iota\_{\rm op} P\_{\rm op} \gamma(\lambda\_0),
$$

where Pop denotes the orthogonal projection from H onto Hop, ιop is the canonical embedding of Hop into H, and I − Pop is viewed as an orthogonal projection in H. From the representation of the resolvent of A<sup>0</sup> in terms of the resolvent of A0,op in (3.5.10) it follows that (3.5.11) may be rewritten as

$$\begin{split} M(iy) &= \text{Re}\,M(\lambda\_0) + (iy - \text{Re}\,\lambda\_0)\,\gamma(\lambda\_0)^\*(I - P\_{\text{op}})\gamma(\lambda\_0) \\ &+ \gamma(\lambda\_0)^\*\iota\_{\text{op}} \left[iy - \text{Re}\,\lambda\_0 + (iy - \lambda\_0)(iy - \overline{\lambda}\_0)(A\_{0,\text{op}} - iy)^{-1}\right]P\_{\text{op}}\gamma(\lambda\_0) \end{split} \tag{3.5.12}$$

for all y > 0. This formula will be used to study the behavior of M at ∞. It turns out that the strong limit of (1/(iy))M(iy), y → +∞, is closely connected with the multivalued part of A0.

**Proposition 3.5.7.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A<sup>0</sup> = ker Γ0, and let M and γ be the corresponding Weyl function and γ-field. Then for each λ<sup>0</sup> ∈ ρ(A0) and ϕ ∈ G one has

$$\lim\_{y \to +\infty} \frac{1}{iy} M(iy)\varphi = \gamma(\lambda\_0)^\*(I - P\_{\rm op})\gamma(\lambda\_0)\varphi. \tag{3.5.13}$$

Proof. It follows from (3.5.12) with λ<sup>0</sup> ∈ ρ(A0) that

$$\begin{split} \frac{1}{iy}M(iy) &= \frac{1}{iy}\text{Re}\,M(\lambda\_0) + \frac{iy - \text{Re}\,\lambda\_0}{iy}\gamma(\lambda\_0)^\*(I - P\_{\text{op}})\gamma(\lambda\_0) \\ &+ \frac{1}{iy}\gamma(\lambda\_0)^\*\iota\_{\text{op}}\left[iy - \text{Re}\,\lambda\_0 + (iy - \lambda\_0)(iy - \overline{\lambda}\_0)(A\_{0,\text{op}} - iy)^{-1}\right]P\_{\text{op}}\gamma(\lambda\_0). \end{split}$$

It suffices to show that the first and the third term on the right-hand side converge to 0 strongly. This is obvious for the first term on the right-hand side. For the third term note that for y → +∞ one has

$$\frac{iy - \operatorname{Re}\lambda\_0}{iy} + \frac{(iy - \lambda\_0)(iy - \overline{\lambda}\_0)}{iy} \frac{1}{t - iy} \to 0, \qquad t \in \mathbb{R},$$

and hence the spectral calculus for A0,op shows that for y → +∞

$$\frac{1}{iy}\gamma(\lambda\_0)^\*\iota\_{\rm op}\left[iy - \text{Re}\,\lambda\_0 + (iy - \lambda\_0)(iy - \overline{\lambda}\_0)(A\_{0,\text{op}} - iy)^{-1}\right]P\_{\rm op}\gamma(\lambda\_0)\varphi$$

tends to zero for all ϕ ∈ G; cf. Lemma 1.5.3. This leads to (3.5.13). □

**Definition 3.5.8.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A<sup>0</sup> = ker Γ0, and let M be the corresponding Weyl function. The operator R<sup>∞</sup> : G → G is defined as the strong limit

$$\mathcal{R}\_{\infty}\varphi = \lim\_{y \to +\infty} \frac{1}{iy} M(iy)\varphi, \qquad \varphi \in \mathcal{G}.$$

It follows from Proposition 3.5.7 that R<sup>∞</sup> ∈ **B**(G). For the following properties of R<sup>∞</sup> recall the notations

$$\widehat{\mathfrak{N}}\_{\infty}(S) = \left\{ \{0, f\} : f \in \mathfrak{N}\_{\infty}(S) \right\}, \quad \mathfrak{N}\_{\infty}(S) = \operatorname{mul} S,$$

and

$$\widehat{\mathfrak{N}}\_{\infty}(A\_0) = \left\{ \{0, f\} : f \in \mathfrak{N}\_{\infty}(A\_0) \right\}, \quad \mathfrak{N}\_{\infty}(A\_0) = \operatorname{mul} A\_0.$$

The next lemma can be viewed as a variant of Lemma 3.5.4 for x = ∞. Here the main interest is in the closed subspace N̂∞(A0) ⊖ N̂∞(S), that is, the orthogonal complement of N̂∞(S) in N̂∞(A0).

**Lemma 3.5.9.** Let λ0 ∈ ρ(A0) and let P∞ be the orthogonal projection from H onto N∞(A0) ⊖ N∞(S). Then the operator R∞ has the representation

$$\mathcal{R}\_{\infty}\varphi = \Gamma\_1\{0, P\_{\infty}\gamma(\lambda\_0)\varphi\}, \qquad \varphi \in \mathcal{G}. \tag{3.5.14}$$

Proof. First recall from Corollary 2.3.3 that for {0, h′} ∈ A0 one has

$$
\Gamma\_1 \{ 0, h' \} = \gamma(\lambda\_0)^\* h', \qquad \lambda\_0 \in \rho(A\_0).
$$

Now let ϕ ∈ G and consider h′ = (I − Pop)γ(λ0)ϕ ∈ mul A0. According to Proposition 3.5.7,

$$\begin{split} \mathcal{R}\_{\infty}\varphi &= \gamma(\lambda\_0)^\*(I - P\_{\rm op})\gamma(\lambda\_0)\varphi = \gamma(\lambda\_0)^\*h' = \Gamma\_1\{0, h'\} \\ &= \Gamma\_1\{0, (I - P\_{\rm op})\gamma(\lambda\_0)\varphi\}. \end{split} \tag{3.5.15}$$

Now observe that for ϕ ∈ G

$$(I - P\_{\rm op})\gamma(\lambda\_0)\varphi = P\_{\infty}\gamma(\lambda\_0)\varphi + P\_{\mathfrak{N}\_{\infty}(S)}\gamma(\lambda\_0)\varphi.$$

Since {0, PN∞(S)γ(λ0)ϕ} ∈ S and S = ker Γ<sup>0</sup> ∩ ker Γ<sup>1</sup> by Proposition 2.1.2 (ii), it follows that

$$\Gamma\_1 \left\{ 0, P\_{\mathfrak{N}\_{\infty}(S)} \gamma(\lambda\_0) \varphi \right\} = 0$$

and hence (3.5.15) leads to (3.5.14). □

In the next theorem the multivalued part of A<sup>0</sup> is characterized by means of the Weyl function M and the operator R∞.

**Theorem 3.5.10.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A<sup>0</sup> = ker Γ0, and let M and γ be the corresponding Weyl function and γ-field. Then the mapping

$$\tau: \widehat{\mathfrak{N}}\_{\infty}(A\_0) \ominus \widehat{\mathfrak{N}}\_{\infty}(S) \to \overline{\text{ran}}\,\mathcal{R}\_{\infty}, \quad \widehat{f} \mapsto \Gamma\_1 \widehat{f},\tag{3.5.16}$$

is an isomorphism. In particular,

$$\operatorname{mul} A\_0 \ominus \operatorname{mul} S \neq \{0\} \quad \Leftrightarrow \quad \mathcal{R}\_\infty \neq 0.$$

Proof. The proof follows a strategy similar to the one used in the proof of Theorem 3.5.5. To simplify notation, set

$$\mathfrak{K}\_{\infty} := \widehat{\mathfrak{N}}\_{\infty}(A\_0) \ominus \widehat{\mathfrak{N}}\_{\infty}(S) = \left\{ \{0, f'\} : f' \in \operatorname{mul} A\_0 \ominus \operatorname{mul} S \right\}.$$

The mapping Γ<sup>1</sup> : S<sup>∗</sup> → G is continuous and, in particular, its restriction to K<sup>∞</sup> ⊂ S<sup>∗</sup> is continuous.

Step 1. The restriction of the mapping Γ1 to K∞ is injective. Indeed, let f̂ ∈ K∞ with Γ1f̂ = 0. The assumption f̂ ∈ K∞ implies that f̂ ∈ A0 and hence Γ0f̂ = 0. Therefore, f̂ ∈ ker Γ0 ∩ ker Γ1 = S. Since f̂ = {0, f′} ∈ N̂∞(A0) ⊖ N̂∞(S), this implies f̂ = 0.

Step 2. The range of the restriction of Γ1 to K∞ is closed. In fact, let (ϕn) be a sequence in ran (Γ1 ↾ K∞) such that ϕn → ϕ ∈ G. Then there exist (f̂n) in K∞ such that Γ1f̂n = ϕn, and as f̂n ∈ A0 one has Γ0f̂n = 0. Thus, Γf̂n = {0, ϕn} → {0, ϕ}, and since the restriction of Γ to S∗ ⊖ S is an isomorphism onto G × G, it follows that the f̂n converge to some element f̂, which belongs to the closed subspace K∞. Therefore, Γ1f̂ = ϕ and hence ran (Γ1 ↾ K∞) is closed.

Step 3. The linear space

$$\left\{ \left\{ 0, P\_{\infty} \gamma(\lambda\_0) \varphi \right\} : \varphi \in \mathcal{G} \right\}$$

is dense in the Hilbert space K∞ = N̂∞(A0) ⊖ N̂∞(S). To see this, let f̂ ∈ K∞ be orthogonal to all elements {0, P∞γ(λ0)ϕ}, ϕ ∈ G. Then it follows from f̂ = {0, f′} and Corollary 2.3.3 that for all ϕ ∈ G one has

$$0 = \left(\widehat{f}, \{0, P\_{\infty}\gamma(\lambda\_0)\varphi\}\right) = (f', P\_{\infty}\gamma(\lambda\_0)\varphi) = (\gamma(\lambda\_0)^\*f', \varphi) = (\Gamma\_1 \widehat{f}, \varphi),$$

so that Γ1f̂ = 0, and hence f̂ = 0 by Step 1.

Step 4. The mapping in (3.5.16) is an isomorphism. To see this, observe that

$$
\operatorname{ran}\mathcal{R}\_{\infty} \subset \operatorname{ran}\left(\Gamma\_1 \upharpoonright \mathfrak{K}\_{\infty}\right) \subset \overline{\operatorname{ran}}\,\mathcal{R}\_{\infty}.\tag{3.5.17}
$$

The first inclusion in (3.5.17) follows from (3.5.14). From the same identity one also sees that

$$\Gamma\_1 \{ 0, P\_{\infty} \gamma(\lambda\_0) \varphi \} = \mathcal{R}\_{\infty} \varphi \in \text{ran} \, \mathcal{R}\_{\infty} \subset \overline{\text{ran}} \, \mathcal{R}\_{\infty}.$$

Hence, the second inclusion in (3.5.17) follows from Step 3 and the boundedness of Γ1. It is clear from (3.5.17) and Step 2 that

$$\text{ran}\left(\Gamma\_1 \upharpoonright \mathfrak{K}\_{\infty}\right) = \overline{\text{ran}}\,\mathcal{R}\_{\infty},$$

and hence, due to Step 1, the mapping in (3.5.16) is an isomorphism. □

**Corollary 3.5.11.** Assume that the closed symmetric relation S in Theorem 3.5.10 is an operator. Then A<sup>0</sup> is an operator if and only if R<sup>∞</sup> = 0.

An equivalent statement is that A<sup>0</sup> is an operator if and only if for all ϕ ∈ G

$$\lim\_{y \to +\infty} \frac{1}{iy} M(iy)\varphi = 0.\tag{3.5.18}$$
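
Corollary 3.5.11 and (3.5.18) can likewise be sketched numerically with two hypothetical scalar Weyl functions (illustrative assumptions, not examples from the text): M(λ) = λ, whose linear growth at ∞ signals a multivalued part, and M(λ) = −1/λ, for which the limit (3.5.18) vanishes:

```python
# Hedged numerical sketch (illustrative scalar examples, not from the text).
def R_inf(M, y=1e8):
    """Approximate R_inf = lim_{y -> +infty} M(iy)/(iy) from Definition 3.5.8."""
    return M(1j * y) / (1j * y)

# M(lam) = lam grows linearly: R_inf = 1, so A_0 would have a multivalued part.
print(round(R_inf(lambda lam: lam).real, 6))       # 1.0
# M(lam) = -1/lam satisfies (3.5.18): R_inf = 0, so A_0 would be an operator.
print(round(abs(R_inf(lambda lam: -1 / lam)), 6))  # 0.0
```

In the scalar picture, R∞ extracts the coefficient of the linear term in M at ∞, which is precisely the contribution of the multivalued part in (3.5.13).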

## **3.6 Spectra and local minimality**

As in Section 3.5, let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗ with A0 = ker Γ0, and corresponding γ-field γ and Weyl function M. The spectrum of the self-adjoint extension A0 and its division into absolutely continuous and singular spectra (cf. Section 3.3) will now be discussed in detail in terms of the boundary behavior of M. For this purpose it is assumed that S either is simple or satisfies a local simplicity condition with respect to an open interval Δ ⊂ R and the self-adjoint extension A0; see Definition 3.4.9 for the notion of local simplicity.

The following theorem describes the point spectrum and the continuous spectrum of A0 in terms of the boundary behavior of the Weyl function M; cf. Proposition 3.3.1.

**Theorem 3.6.1.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗ with A0 = ker Γ0, let M and γ be the corresponding Weyl function and γ-field, and let Rx = lim_{y↓0} iyM(x + iy), x ∈ R, be the operator in Definition 3.5.2. Let Δ ⊂ R be an open interval and assume that the condition

$$E(\Delta)\mathfrak{H} = \overline{\operatorname{span}}\left\{ E(\Delta)\gamma(\nu)\varphi : \nu \in \mathbb{C} \setminus \mathbb{R}, \varphi \in \mathcal{G} \right\} \tag{3.6.1}$$

is satisfied, where E(·) is the spectral measure of A0. Then the following statements hold for each x ∈ Δ:


Proof. (i) Recall first that by Proposition 2.3.6 (iii) or (v) the function λ → M(λ) is holomorphic on ρ(A0), which proves the implication (⇒). In order to verify the other implication assume that M can be continued analytically to some x ∈ Δ. Then there exists an open neighborhood O of x in C with O ∩ R ⊂ Δ to which M can be continued analytically. Choose a, b ∈ R with x ∈ (a, b), [a, b] ⊂ O, and a, b ∉ σp(A0). The spectral projection E((a, b)) of A0 corresponding to the interval (a, b) is given by Stone's formula (1.5.7)

$$E((a,b)) = \lim\_{\delta \downarrow 0} \frac{1}{2\pi i} \int\_a^b \left( \left( A\_0 - (t+i\delta) \right)^{-1} - \left( A\_0 - (t-i\delta) \right)^{-1} \right) dt,$$

where the integral on the right-hand side is understood in the strong sense. For ν ∈ C \ R and ϕ ∈ G this implies

$$\begin{split} \left\| E((a,b))\gamma(\nu)\varphi \right\|^2 &= \left(\gamma(\nu)^\* E((a,b))\gamma(\nu)\varphi, \varphi\right) \\ &= \lim\_{\delta \downarrow 0} \frac{1}{2\pi i} \int\_a^b \left( (\gamma(\nu)^\* (A\_0 - (t+i\delta))^{-1} \gamma(\nu)\varphi, \varphi) \right. \\ &\qquad \left. - \left( \gamma(\nu)^\* (A\_0 - (t-i\delta))^{-1} \gamma(\nu)\varphi, \varphi \right) \right) dt \end{split} \tag{3.6.2}$$

and the identities

$$\begin{aligned} &\gamma(\nu)^\* \Big(A\_0 - (t \pm i\delta)\Big)^{-1} \gamma(\nu) \\ &= \frac{M(t \pm i\delta)}{\big(t \pm i\delta - \nu\big)\big(t \pm i\delta - \overline{\nu}\big)} + \frac{M(\overline{\nu})}{(\overline{\nu} - (t \pm i\delta))(\overline{\nu} - \nu)} + \frac{M(\nu)}{(\nu - (t \pm i\delta))(\nu - \overline{\nu})} \end{aligned}$$

from Proposition 2.3.6 (vi) (with λ = t ± iδ and μ = ν) together with the holomorphy of M in O yield that the integral on the right-hand side of (3.6.2) is zero. Hence, E((a, b))γ(ν)ϕ = 0 for all ν ∈ C \ R and ϕ ∈ G. On the other hand, since (a, b) ⊂ Δ, the assumption (3.6.1) and Proposition 3.4.10 (i) yield

$$E((a,b))\mathfrak{H} = \overline{\operatorname{span}}\left\{ E((a,b))\gamma(\nu)\varphi : \nu \in \mathbb{C} \setminus \mathbb{R}, \varphi \in \mathcal{G} \right\},$$

and hence one concludes from E((a, b))γ(ν)ϕ = 0 for ν ∈ C \ R and ϕ ∈ G that E((a, b)) = 0. In particular, x ∈ ρ(A0) by Proposition 3.3.1 (i).

(ii)–(iii) According to Proposition 3.4.10 (ii), the condition (3.6.1) implies that S does not have eigenvalues in Δ. Hence, items (ii) and (iii) follow immediately from item (i) and Corollary 3.5.6.

(iv) Assume that x ∈ Δ is an isolated eigenvalue of A0. Then by Proposition 2.3.6 (iii) or (v) there exists an open neighborhood O of x such that M is holomorphic on O \ {x}. Since x ∉ σp(S) by Proposition 3.4.10 (ii), it follows from Corollary 3.5.6 that there exists ϕ ∈ G such that

$$\mathcal{R}\_x \varphi = \lim\_{y \downarrow 0} iyM(x+iy)\varphi \neq 0. \tag{3.6.3}$$

This implies that M has a pole at x, which is of first order; cf. Corollary 2.3.9. By Remark 3.5.3 the residue of M at x is given by Rx. Conversely, if M has a pole (of first order) at x, then (3.6.3) holds for some ϕ ∈ G. Thus, x is an eigenvalue of A0 by Corollary 3.5.6 and from item (i) it follows that there exists an open neighborhood O of x in C such that O \ {x} ⊂ ρ(A0). Hence, x is an isolated point in the spectrum of A0. □
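Item (iv) can be illustrated with a scalar toy model (a hypothetical example, not from the text): a Weyl function with a first-order pole at x0 produces a nonzero limit Rx0 = lim_{y↓0} iyM(x0 + iy), while at points of holomorphy the limit vanishes.

```python
# Sketch: detecting an isolated eigenvalue via R_x = lim_{y->0} iy*M(x+iy).
# The scalar function below is a hypothetical example with a first-order
# pole at x0 = 2; its residue -1 is recovered as the limit R_{x0}.

def R(M, x, y=1e-8):
    """Crude numerical stand-in for lim_{y -> 0} iy * M(x + iy)."""
    return 1j * y * M(x + 1j * y)

x0 = 2.0
M = lambda l: 1.0 / (x0 - l) + 0.5   # pole at x0, holomorphic elsewhere

print(abs(R(M, x0) + 1.0))   # ~ 0: R_{x0} = -1 != 0, isolated eigenvalue
print(abs(R(M, 3.0)))        # ~ 0: M holomorphic at x = 3, no eigenvalue
```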

Under the condition that S is simple the spectrum of A0 can be described completely in terms of the Weyl function M.

**Corollary 3.6.2.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, and let M and γ be the corresponding Weyl function and γ-field. Assume that S is simple. Then the assertions (i)–(iv) in Theorem 3.6.1 hold for all x ∈ R.

To describe the absolutely continuous, singular, and singular continuous parts of the spectrum of A0 in terms of the boundary behavior of the Weyl function M, some preliminary lemmas are needed.

**Lemma 3.6.3.** Let λ0 ∈ ρ(A0), x ∈ R, and ϕ ∈ G. Then the (possibly improper) limits

$$\operatorname{Im}\left(M(x+i0)\varphi,\varphi\right) \quad \text{and} \quad \operatorname{Im}\left(\left(A\_0-(x+i0)\right)^{-1}\gamma(\lambda\_0)\varphi,\gamma(\lambda\_0)\varphi\right)$$

exist simultaneously, and they satisfy

$$\operatorname{Im}\left(M(x+i0)\varphi,\varphi\right) = |x-\lambda\_0|^2 \operatorname{Im}\left((A\_0-(x+i0))^{-1}\gamma(\lambda\_0)\varphi,\gamma(\lambda\_0)\varphi\right).$$

Proof. It is no restriction to assume that x ≠ λ0, as otherwise λ → M(λ) and λ → (A0 − λ)^{-1} are both holomorphic at x = λ0 ∈ ρ(A0) ∩ R, so that the above limits are zero and the identities hold.

For x ≠ λ0 it follows from (3.5.1) that

$$\begin{split} \operatorname{Im}\left(M(x+iy)\varphi,\varphi\right) &= y\|\gamma(\lambda\_0)\varphi\|^2 \\ &+ \left(|x-\lambda\_0|^2-y^2\right)\operatorname{Im}\left(\left(A\_0-(x+iy)\right)^{-1}\gamma(\lambda\_0)\varphi,\gamma(\lambda\_0)\varphi\right) \\ &+ 2(x-\operatorname{Re}\lambda\_0)y\operatorname{Re}\left(\left(A\_0-(x+iy)\right)^{-1}\gamma(\lambda\_0)\varphi,\gamma(\lambda\_0)\varphi\right). \end{split}$$

The first term on the right-hand side clearly goes to 0 as y ↓ 0. For the third term on the right-hand side, observe that for y ↓ 0 one has

$$y \operatorname{Re} \left( \frac{1}{t - (x + iy)} \right) = \frac{y(t - x)}{(t - x)^2 + y^2} \to 0, \quad t \in \mathbb{R},$$

and since the approximating functions are uniformly bounded, the spectral calculus for A0 (see Lemma 1.5.3) yields

$$\lim\_{y \downarrow 0} y \operatorname{Re} \left( (A\_0 - (x + iy))^{-1} \gamma(\lambda\_0) \varphi, \gamma(\lambda\_0) \varphi \right) = 0.$$

Hence, also the third term on the right-hand side goes to 0 as y ↓ 0. Furthermore, |x − λ0|² − y² → |x − λ0|² > 0 as y ↓ 0. Therefore, Im (M(x + iy)ϕ, ϕ) converges as y ↓ 0 if and only if

$$\operatorname{Im}\left( (A\_0 - (x + iy))^{-1} \gamma(\lambda\_0) \varphi, \gamma(\lambda\_0) \varphi \right)$$

converges as y ↓ 0. In addition, it is clear that the identity in the lemma for the limits is satisfied. □

Recall that the self-adjoint extension A0 generates a collection of finite Borel measures on R: for each h ∈ H the finite Borel measure μh in (3.3.2) is defined by μh = (E(·)h, h), where E is the spectral measure of A0. Now the interest is in the Borel transform Fh of μh = (E(·)h, h), that is,

$$F\_h(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d(E(t)h, h), \quad \lambda \in \mathbb{C} \setminus \mathbb{R};$$

cf. Definition 3.1.3. In particular, if λ = x + iy, where x ∈ R and y > 0, then one has

$$\operatorname{Im} F\_h(x+iy) = \operatorname{Im} \left( (A\_0 - (x+iy))^{-1} h, h \right) \tag{3.6.4}$$

and

$$yF\_h(x+iy) = y\left(\left(A\_0 - (x+iy)\right)^{-1}h, h\right). \tag{3.6.5}$$

By means of Lemma 3.6.3 the boundary values of the Borel transform Fh for a class of elements h ∈ H are expressed in terms of the boundary values of the Weyl function M.

**Lemma 3.6.4.** Let Δ ⊂ R be an open interval and let λ0 ∈ C \ R. Then for elements of the form h = E(Δ)γ(λ0)ϕ, ϕ ∈ G, the following statements hold:

(i) If x ∈ Δ, then the (possibly improper) limits

$$\operatorname{Im} F\_h(x+i0) \quad \text{and} \quad \operatorname{Im} \left( M(x+i0)\varphi, \varphi \right)$$

exist simultaneously, and

$$\operatorname{Im} F\_h(x+i0) = |x - \lambda\_0|^{-2} \operatorname{Im} \left( M(x+i0)\varphi, \varphi \right).$$

(ii) If x ∉ Δ̄, then Im Fh(x + i0) = 0.

Proof. It follows from (3.6.4) that for all h ∈ H the (possibly improper) limits Im Fh(x + i0) and Im ((A0 − (x + i0))^{-1}h, h) exist simultaneously and coincide. For the choice h = γ(λ0)ϕ, ϕ ∈ G, it follows from Lemma 3.6.3 that the (possibly improper) limits Im Fh(x + i0) and Im (M(x + i0)ϕ, ϕ) exist simultaneously, and

$$\begin{aligned} \operatorname{Im} F\_h(x+i0) &= \operatorname{Im} \left( \left( A\_0 - (x+i0) \right)^{-1} \gamma(\lambda\_0) \varphi, \gamma(\lambda\_0) \varphi \right) \\ &= |x - \lambda\_0|^{-2} \operatorname{Im} \left( M(x+i0) \varphi, \varphi \right). \end{aligned}$$

If h = E(Δ)γ(λ0)ϕ, ϕ ∈ G, then for x ∈ Δ the spectral calculus implies

$$\begin{aligned} \operatorname{Im} F\_h(x+i0) &= \operatorname{Im} \left( \left( A\_0 - (x+i0) \right)^{-1} E(\Delta) \gamma(\lambda\_0) \varphi, E(\Delta) \gamma(\lambda\_0) \varphi \right) \\ &= \operatorname{Im} \left( \left( A\_0 - (x+i0) \right)^{-1} \gamma(\lambda\_0) \varphi, \gamma(\lambda\_0) \varphi \right) \\ &= |x - \lambda\_0|^{-2} \operatorname{Im} \left( M(x+i0) \varphi, \varphi \right), \end{aligned}$$

while for x ∉ Δ̄ it follows that

$$\begin{split} \operatorname{Im} F\_h(x+i0) &= \operatorname{Im} \left( \left( A\_0 - (x+i0) \right)^{-1} E(\Delta) \gamma(\lambda\_0) \varphi, E(\Delta) \gamma(\lambda\_0) \varphi \right) \\ &= 0. \end{split}$$

This shows the assertions in (i) and (ii). □

Now the absolutely continuous spectrum, the singular spectrum, and the singular continuous spectrum (cf. Section 3.3) of A0 can be described in terms of the boundary behavior of the Weyl function M, still under the assumption of local simplicity. The results are essentially consequences of Theorem 3.2.3, Theorem 3.2.6, and Corollary 3.3.6.

**Theorem 3.6.5.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗ with A0 = ker Γ0, and let M and γ be the corresponding Weyl function and γ-field. Let Δ ⊂ R be an open interval and assume that the condition

$$E(\Delta)\mathfrak{H} = \overline{\operatorname{span}}\left\{ E(\Delta)\gamma(\nu)\varphi : \nu \in \mathbb{C} \setminus \mathbb{R}, \varphi \in \mathcal{G} \right\} \tag{3.6.6}$$


is satisfied, where E(·) is the spectral measure of A0. Then the absolutely continuous spectrum of A0 in Δ is given by

$$\overline{\sigma\_{\rm ac}(A\_0) \cap \Delta} = \overline{\bigcup\_{\varphi \in \mathfrak{G}} \text{clos}\_{\text{ac}} \left( \{ x \in \Delta : 0 < \text{Im} \left( M(x + i0)\varphi, \varphi \right) < \infty \} \right)}. \tag{3.6.7}$$

If S is simple, then (3.6.7) holds for every open interval Δ, including Δ = R.

Proof. By assumption, the span of the set

$$\mathcal{D}\_{\Delta} := \left\{ E(\Delta)\gamma(\nu)\varphi : \nu \in \mathbb{C} \; \backslash \; \mathbb{R}, \; \varphi \in \mathcal{G} \right\}$$

is dense in E(Δ)H and hence Corollary 3.3.6 implies the identity

$$\overline{\sigma\_{\mathrm{ac}}(A\_0) \cap \Delta} = \overline{\bigcup\_{h \in \mathcal{D}\_{\Delta}} \sigma(\mu\_{h, \mathrm{ac}})}.$$

According to Theorem 3.2.6 (i) (where the set F was replaced by R),

$$\sigma(\mu\_{h,\text{ac}}) = \text{clos}\_{\text{ac}}\left(\{x \in \mathbb{R} : 0 < \text{Im}\, F\_h(x+i0) < \infty\}\right),$$

which for h = E(Δ)γ(ν)ϕ ∈ D<sup>Δ</sup> is equivalent to

$$\sigma(\mu\_{h,\text{ac}}) = \text{clos}\_{\text{ac}}\left(\{x \in \Delta : 0 < \text{Im}\left(M(x+i0)\varphi, \varphi\right) < \infty\}\right)$$

by Lemma 3.6.4. This yields (3.6.7). □
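As an illustration of (3.6.7), consider the classical half-line example −d²/dx² on (0, ∞) with a Dirichlet boundary condition, whose Titchmarsh–Weyl function is m(λ) = i√λ (principal branch); here Im m(x + i0) = √x for x > 0 and vanishes for x < 0, so the criterion recovers the absolutely continuous spectrum [0, ∞). A small numerical sketch:

```python
import cmath

# Sketch: boundary behavior of m(lambda) = i*sqrt(lambda), the Titchmarsh-
# Weyl function of -d^2/dx^2 on (0, oo) with Dirichlet boundary condition.

def m(l):
    return 1j * cmath.sqrt(l)  # principal branch of the square root

def im_boundary(x, y=1e-10):
    """Numerical stand-in for Im m(x + i0)."""
    return m(x + 1j * y).imag

# 0 < Im m(x+i0) < oo exactly for x > 0: ac spectrum [0, oo)
print(im_boundary(4.0))    # ~ 2.0 = sqrt(4)
print(im_boundary(-4.0))   # ~ 0.0
```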

The next corollary gives a necessary and sufficient condition for the absence of absolutely continuous spectrum.

**Corollary 3.6.6.** Let A0 and M be as in Theorem 3.6.5 and let Δ ⊂ R be an open interval such that the condition (3.6.6) is satisfied. Then

$$
\sigma\_{\mathrm{ac}}(A\_0) \cap \Delta = \emptyset
$$

if and only if for all ϕ ∈ G and for almost all x ∈ Δ

$$\operatorname{Im}\left(M(x+i0)\varphi,\varphi\right) = 0.$$

If S is simple, then the assertion holds for every open interval Δ, including Δ = R.

Proof. Since closac(B) = ∅ if and only if m(B) = 0 for any Borel set B ⊂ R by Lemma 3.2.5 (i), it is clear that for ϕ ∈ G

$$\text{clos}\_{\text{ac}}\left(\left\{x \in \Delta : 0 < \text{Im}\left(M(x+i0)\varphi, \varphi\right) < \infty\right\}\right) = \emptyset \tag{3.6.8}$$

if and only if

$$m\left(\left\{x \in \Delta : 0 < \operatorname{Im}\left(M(x+i0)\varphi, \varphi\right) < \infty\right\}\right) = 0.\tag{3.6.9}$$

Assume first that σac(A0) ∩ Δ = ∅. Then (3.6.7) yields (3.6.8) for all ϕ ∈ G, and hence (3.6.9) holds for all ϕ ∈ G. Moreover, for h = γ(λ0)ϕ, λ0 ∈ C \ R, and x ∈ R one has

$$\operatorname{Im}\left(M(x+i0)\varphi,\varphi\right) = |x-\lambda\_0|^2 \operatorname{Im} F\_h(x+i0)$$

by Lemma 3.6.4 (with Δ = R), and according to Theorem 3.1.4 (i) this limit exists and is finite for m-almost all x ∈ R. Hence, (3.6.9) implies Im (M(x + i0)ϕ, ϕ) = 0 for all ϕ ∈ G and m-almost all x ∈ Δ. For the converse implication assume that Im (M(x + i0)ϕ, ϕ) = 0 for all ϕ ∈ G and for m-almost all x ∈ Δ. Then (3.6.9) and hence also (3.6.8) hold for all ϕ ∈ G. Thus, (3.6.7) yields σac(A0) ∩ Δ = ∅. □

The next lemma is of a similar nature to Lemma 3.6.4. Here the limits exist for all x ∈ R by (3.1.12)–(3.1.13) and Proposition 3.5.1.

**Lemma 3.6.7.** Let Δ ⊂ R be an open interval and let λ0 ∈ C \ R. Then for elements of the form h = E(Δ)γ(λ0)ϕ, ϕ ∈ G, one has

$$\lim\_{y \downarrow 0} y F\_h(x+iy) = \begin{cases} |x - \lambda\_0|^{-2} \lim\_{y \downarrow 0} y (M(x+iy)\varphi, \varphi), & x \in \Delta, \\ 0, & x \notin \overline{\Delta}. \end{cases}$$

Proof. For h = γ(λ0)ϕ, ϕ ∈ G, it follows from (3.6.5) and (3.5.1) (cf. (3.5.3) in the proof of Proposition 3.5.1) that

$$\begin{aligned} \lim\_{y \downarrow 0} y F\_h(x + iy) &= \lim\_{y \downarrow 0} y \left( \left( A\_0 - (x + iy) \right)^{-1} \gamma(\lambda\_0) \varphi, \gamma(\lambda\_0) \varphi \right) \\ &= |x - \lambda\_0|^{-2} \lim\_{y \downarrow 0} y (M(x + iy) \varphi, \varphi) \end{aligned}$$

for all x ∈ R. If h = E(Δ)γ(λ0)ϕ, ϕ ∈ G, then for x ∈ Δ the spectral calculus shows that

$$\begin{aligned} \lim\_{y \downarrow 0} y F\_h(x + iy) &= \lim\_{y \downarrow 0} y \left( \left( A\_0 - (x + iy) \right)^{-1} E(\Delta) \gamma(\lambda\_0) \varphi, E(\Delta) \gamma(\lambda\_0) \varphi \right) \\ &= \lim\_{y \downarrow 0} y \left( \left( A\_0 - (x + iy) \right)^{-1} \gamma(\lambda\_0) \varphi, \gamma(\lambda\_0) \varphi \right) \\ &= |x - \lambda\_0|^{-2} \lim\_{y \downarrow 0} y \left( M(x + iy) \varphi, \varphi \right), \end{aligned}$$

while for x ∉ Δ̄ one has

$$\begin{aligned} \lim\_{y \downarrow 0} y F\_h(x+iy) &= \lim\_{y \downarrow 0} y \left( \left( A\_0 - (x+iy) \right)^{-1} E(\Delta) \gamma(\lambda\_0) \varphi, E(\Delta) \gamma(\lambda\_0) \varphi \right) \\ &= 0. \end{aligned}$$

This completes the proof. □

Next some inclusions for the singular and singular continuous spectra of A0 will be shown.

**Theorem 3.6.8.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗ with A0 = ker Γ0, and let M and γ be the corresponding Weyl function and γ-field. Let Δ ⊂ R be an open interval and assume that the condition

$$E(\Delta)\mathfrak{H} = \overline{\operatorname{span}}\left\{ E(\Delta)\gamma(\nu)\varphi : \nu \in \mathbb{C} \setminus \mathbb{R}, \varphi \in \mathcal{G} \right\} \tag{3.6.10}$$

is satisfied, where E(·) is the spectral measure of A0. Then the following statements hold:

(i) The singular spectrum of A0 in Δ satisfies

$$\left(\sigma\_{\mathfrak{s}}(A\_0)\cap\Delta\right)\subset\overline{\bigcup\_{\varphi\in\mathfrak{G}}\left\{x\in\Delta:\operatorname{Im}\left(M(x+i0)\varphi,\varphi\right)=\infty\right\}}.$$

(ii) The singular continuous spectrum of A0 in Δ, i.e., σsc(A0) ∩ Δ, is contained in the set

$$\bigcup\_{\varphi \in \mathcal{G}} \operatorname{clos}\_{\mathbb{C}} \left( \left\{ x \in \Delta : \operatorname{Im} \left( M(x + i0)\varphi, \varphi \right) = \infty, \lim\_{y \downarrow 0} y(M(x + iy)\varphi, \varphi) = 0 \right\} \right).$$

If S is simple, then (i) and (ii) hold for every open interval Δ, including Δ = R.

Proof. By assumption, the span of the set

$$\mathcal{D}\_{\Delta} := \left\{ E(\Delta)\gamma(\nu)\varphi : \nu \in \mathbb{C} \; \backslash \; \mathbb{R}, \; \varphi \in \mathcal{G} \right\}$$

is dense in E(Δ)H.

(i) Recall that by Corollary 3.3.6 one has

$$\overline{\sigma\_{\mathfrak{s}}(A\_0) \cap \Delta} = \overline{\bigcup\_{h \in \mathcal{D}\_{\Delta}} \sigma(\mu\_{h,\mathfrak{s}})} \tag{3.6.11}$$

and according to Theorem 3.2.6 (ii) (with F replaced by R)

$$\sigma(\mu\_{h,s}) \subset \overline{\{x \in \mathbb{R} : \operatorname{Im} F\_h(x+i0) = \infty\}}.$$

For h = E(Δ)γ(ν)ϕ ∈ D<sup>Δ</sup> this gives, via Lemma 3.6.4,

$$\sigma(\mu\_{h,s}) \subset \overline{\{x \in \Delta : \text{Im}\left(M(x+i0)\varphi, \varphi\right) = \infty\}}\text{.}$$

Hence, the set σs(A0) ∩ Δ in (3.6.11) is contained in

$$\begin{aligned} &\overline{\bigcup\_{\varphi \in \mathcal{G}} \overline{\{x \in \Delta : \operatorname{Im} \left(M(x+i0)\varphi, \varphi\right) = \infty\}}} \\ &\qquad = \overline{\bigcup\_{\varphi \in \mathcal{G}} \{x \in \Delta : \operatorname{Im} \left(M(x+i0)\varphi, \varphi\right) = \infty\}}, \end{aligned}$$

which yields the assertion in (i).

(ii) Likewise, Corollary 3.3.6 implies

$$\overline{\sigma\_{\rm sc}(A\_0) \cap \Delta} = \overline{\bigcup\_{h \in \mathcal{D}\_{\Delta}} \sigma(\mu\_{h, \rm sc})}. \tag{3.6.12}$$

By Theorem 3.2.6 (iii) (again with F replaced by R),

$$\sigma(\mu\_{h,\text{sc}}) \subset \text{clos}\_{\mathbb{C}}\left(\left\{x \in \mathbb{R} : \text{Im}\, F\_h(x+i0) = \infty, \lim\_{y \downarrow 0} yF\_h(x+iy) = 0\right\}\right),$$

and for h = E(Δ)γ(ν)ϕ ∈ D<sup>Δ</sup> this gives, via Lemma 3.6.4 and Lemma 3.6.7, that σ(μh,sc) is contained in

$$\operatorname{clos}\_{\mathbb{C}}\left(\left\{x \in \Delta : \operatorname{Im}\left(M(x+i0)\varphi,\varphi\right) = \infty, \lim\_{y \downarrow 0} y(M(x+iy)\varphi,\varphi) = 0\right\}\right).$$

Hence, the assertion follows from (3.6.12). □

An immediate corollary of the previous theorem and Lemma 3.2.5 (ii) is a sufficient condition for the absence of the singular continuous spectrum in terms of the limit behavior of the function M.

**Corollary 3.6.9.** Let A0 and M be as in Theorem 3.6.8 and let Δ ⊂ R be an open interval such that the condition (3.6.10) is satisfied. Assume that for each ϕ ∈ G there exist at most countably many x ∈ Δ such that

$$\operatorname{Im}\left(M(x+iy)\varphi,\varphi\right)\to\infty \quad \text{and} \quad y\left(M(x+iy)\varphi,\varphi\right)\to 0 \quad \text{as} \quad y\downarrow 0.$$

Then

$$
\sigma\_{\rm sc}(A\_0) \cap \Delta = \emptyset.
$$

If S is simple, then the assertion holds for every open interval Δ, including Δ = R.

As a further corollary of the theorems of this section, sufficient conditions are provided for the spectrum of A0 to be purely absolutely continuous or purely singular continuous, respectively, in some set.

**Corollary 3.6.10.** Let A0 and M be as in Theorem 3.6.5 or Theorem 3.6.8 and let Δ ⊂ R be an open interval such that the condition (3.6.6) or (3.6.10) is satisfied. Assume that for all ϕ ∈ G and all x ∈ Δ

$$\lim\_{y \downarrow 0} yM(x+iy)\varphi = 0.\tag{3.6.13}$$

Then the following statements hold:


If S is simple and Δ is an open interval such that (3.6.13) holds for all ϕ ∈ G and all x ∈ Δ, then (i) and (ii) are satisfied.

Proof. Note first that the assumption (3.6.13) yields σp(A0) ∩ Δ = ∅; this follows immediately from Corollary 3.5.6 and the fact that the condition (3.6.6) or (3.6.10) implies σp(S) ∩ Δ = ∅; cf. Proposition 3.4.10 (ii). The assumption in (i) and Corollary 3.6.9 imply σsc(A0) ∩ Δ = ∅ and hence σ(A0) ∩ Δ = σac(A0) ∩ Δ. Similarly, the assumption in (ii) and Corollary 3.6.6 imply σac(A0) ∩ Δ = ∅ and hence σ(A0) ∩ Δ = σsc(A0) ∩ Δ follows. □

## **3.7 Limit properties of Weyl functions**

Let S be a closed symmetric relation in a Hilbert space H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A0 = ker Γ0, and let M be the corresponding Weyl function. The aim of this section is to relate limit properties of the imaginary part Im M of the Weyl function with defect elements in dom A0 and dom |A0|^{1/2}, and in ran (A0 − x) and ran |A0 − x|^{1/2}, x ∈ R, respectively, where

$$A\_0 = A\_{0, \text{op}} \stackrel{\cdot}{\oplus} A\_{0, \text{mul}} \quad \text{and} \quad |A\_0| = |A\_{0, \text{op}}| \stackrel{\cdot}{\oplus} A\_{0, \text{mul}} \tag{3.7.1}$$

with respect to the usual decomposition H = Hop ⊕ Hmul. This also leads to necessary and sufficient conditions for S to be a densely defined operator in terms of the Weyl function.

The first result connects limit properties of the Weyl function at ∞ with elements in dom A0 ∩ ker (S∗ − λ) and dom |A0|^{1/2} ∩ ker (S∗ − λ) for λ ∈ ρ(A0). Although the decomposition S∗ = A0 +̂ 𝔑λ(S∗) is direct for all λ ∈ ρ(A0), it may happen that dom A0 ∩ ker (S∗ − λ) ≠ {0} if S∗ is multivalued. In fact, if f ∈ dom A0 ∩ ker (S∗ − λ), f ≠ 0, then {f, f'} ∈ A0 for some f' and hence {0, f' − λf} ∈ S∗. Since λ ∈ ρ(A0), this yields mul S∗ ≠ {0}.

The representation (3.5.12) of the Weyl function M in terms of the extension A0 = ker Γ0 will now be used; cf. (3.5.8)–(3.5.9). For simplicity one takes λ0 = i in (3.5.12), which leads to the representation

$$\begin{split} M(iy) &= \operatorname{Re} M(i) + iy\,\gamma(i)^{\*}(I - P\_{\text{op}})\gamma(i) \\ &\quad + \gamma(i)^{\*}\left[iy + (1 - y^{2})(A\_{0,\text{op}} - iy)^{-1}\right]P\_{\text{op}}\gamma(i) \end{split} \tag{3.7.2}$$

for all y > 0. The spectral calculus for the self-adjoint operator A0,op applied to (3.7.2) shows that for ϕ ∈ G and y > 0

$$\begin{split} \operatorname{Im} \left( M(iy)\varphi, \varphi \right) &= y \| (I - P\_{\operatorname{op}})\gamma(i)\varphi \| ^2 \\ &\quad + y \int\_{\mathbb{R}} \frac{t^2 + 1}{t^2 + y^2} \, d \left( E\_{\operatorname{op}}(t)P\_{\operatorname{op}}\gamma(i)\varphi, P\_{\operatorname{op}}\gamma(i)\varphi \right). \end{split} \tag{3.7.3}$$

**Proposition 3.7.1.** Let S be a closed symmetric relation in H, let {G, Γ0, Γ1} be a boundary triplet for S∗, let A<sup>0</sup> = ker Γ0, and let M and γ be the corresponding Weyl function and γ-field. Then the following statements hold for ϕ ∈ G:

(i) γ(λ)ϕ ∈ dom A0 for some, and hence for all, λ ∈ ρ(A0) if and only if

$$\lim\_{y \to +\infty} y \operatorname{Im} \left( M(iy)\varphi, \varphi \right) < \infty;$$

(ii) γ(λ)ϕ ∈ dom |A0|^{1/2} for some, and hence for all, λ ∈ ρ(A0) if and only if

$$\int\_{1}^{\infty} \frac{\operatorname{Im} \left( M(iy)\varphi, \varphi \right)}{y} dy < \infty. \tag{3.7.4}$$

Proof. (i) It suffices to prove the assertion for λ = i, since by Proposition 2.3.2 (ii)

$$\begin{aligned} \gamma(\lambda) &= \left( I + (\lambda - i)(A\_0 - \lambda)^{-1} \right) \gamma(i), \\ \gamma(i) &= \left( I + (i - \lambda)(A\_0 - i)^{-1} \right) \gamma(\lambda) \end{aligned} \tag{3.7.5}$$

for λ ∈ ρ(A0) and hence γ(i)ϕ ∈ dom A0 if and only if γ(λ)ϕ ∈ dom A0. Note first that (3.7.3) yields

$$\begin{split} y \text{Im} \left( M(iy)\varphi, \varphi \right) &= y^2 \| (I - P\_{\text{op}})\gamma(i)\varphi \| ^2 \\ &+ \int\_{\mathbb{R}} \frac{y^2(t^2 + 1)}{t^2 + y^2} \, d(E\_{\text{op}}(t)P\_{\text{op}}\gamma(i)\varphi, P\_{\text{op}}\gamma(i)\varphi). \end{split} \tag{3.7.6}$$

It is clear that the left-hand side of (3.7.6) has a finite limit for y → +∞ if and only if (I − Pop)γ(i)ϕ = 0 and

$$\int\_{\mathbb{R}} t^2 \, d\left(E\_{\text{op}}(t) P\_{\text{op}}\gamma(i)\varphi, P\_{\text{op}}\gamma(i)\varphi\right) < \infty,$$

which follows from the monotone convergence theorem. In other words, the left-hand side of (3.7.6) has a finite limit for y → +∞ if and only if γ(i)ϕ ∈ dom A0.

(ii) As in the proof of (i), it suffices to verify the assertion for λ = i. In fact, if γ(i)ϕ ∈ dom |A0|^{1/2}, then γ(i)ϕ = (|A0|^{1/2} − μ)^{-1}g for some μ ∈ C \ R and g ∈ H. The first identity in (3.7.5) and the functional calculus for the self-adjoint operator A0,op or self-adjoint relation A0 (see Section 1.5) show

$$\begin{aligned} \gamma(\lambda)\varphi &= \left(I + (\lambda - i)(A\_0 - \lambda)^{-1}\right)(|A\_0|^{\frac{1}{2}} - \mu)^{-1}g \\ &= (|A\_0|^{\frac{1}{2}} - \mu)^{-1}\left(I + (\lambda - i)(A\_0 - \lambda)^{-1}\right)g \in \text{dom}\, |A\_0|^{\frac{1}{2}}.\end{aligned}$$

The same argument and the second identity in (3.7.5) show that γ(λ)ϕ ∈ dom |A0|^{1/2} implies γ(i)ϕ ∈ dom |A0|^{1/2}.

It follows from (3.7.3) that

$$\begin{split} \frac{\operatorname{Im}\left(M(iy)\varphi,\varphi\right)}{y} &= \left\|(I - P\_{\operatorname{op}})\gamma(i)\varphi\right\|^2 \\ &+ \int\_{\mathbb{R}} \frac{t^2 + 1}{t^2 + y^2} \, d\left(E\_{\operatorname{op}}(t)P\_{\operatorname{op}}\gamma(i)\varphi, P\_{\operatorname{op}}\gamma(i)\varphi\right). \end{split}$$

Hence, (3.7.4) holds if and only if (I − Pop)γ(i)ϕ = 0 and

$$\int\_1^\infty \left( \int\_{\mathbb{R}} \frac{t^2 + 1}{t^2 + y^2} \, d\big( E\_{\text{op}}(t) P\_{\text{op}} \gamma(i) \varphi, P\_{\text{op}} \gamma(i) \varphi \big) \right) \, dy < \infty.$$

Change the order of integration in the last integral, note that

$$(t^2+1)\int\_1^\infty \frac{1}{t^2+y^2} \, dy = (t^2+1)\frac{1}{|t|}\left(\frac{\pi}{2} - \arctan\frac{1}{|t|}\right), \qquad t \neq 0,$$

and observe that for large |t| one has

$$(t^2+1)\frac{1}{|t|}\left(\frac{\pi}{2}-\arctan\frac{1}{|t|}\right) \sim \frac{\pi}{2}\,|t|$$

and that on compact subsets of R the function

$$t \mapsto \left(t^2 + 1\right) \frac{1}{|t|} \left(\frac{\pi}{2} - \arctan\frac{1}{|t|}\right)$$

is bounded. Hence, (3.7.4) holds if and only if (I − Pop)γ(i)ϕ = 0 and

$$\int\_{\mathbb{R}} |t| \, d \big( E\_{\text{op}}(t) P\_{\text{op}} \gamma(i) \varphi, P\_{\text{op}} \gamma(i) \varphi \big) < \infty.$$

In other words, (3.7.4) holds if and only if γ(i)ϕ ∈ dom |A0|^{1/2}. □
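The integral identity and the growth rate used in the last step of the proof can be checked numerically (a verification sketch, not part of the argument):

```python
import math

# Sketch: check (t^2+1) * int_1^oo dy/(t^2+y^2)
#           = (t^2+1)/|t| * (pi/2 - arctan(1/|t|))   for t != 0,
# and its growth like (pi/2)|t| for large |t|.

def closed_form(t):
    return (t**2 + 1) / abs(t) * (math.pi / 2 - math.atan(1 / abs(t)))

def numeric(t, n=100_000):
    # substitute y = 1/u: int_1^oo dy/(t^2+y^2) = int_0^1 du/(1 + t^2 u^2);
    # midpoint rule on the finite interval (0, 1)
    h = 1.0 / n
    s = sum(h / (1.0 + (t * (k + 0.5) * h) ** 2) for k in range(n))
    return (t**2 + 1) * s

for t in (0.5, 3.0, 50.0):
    print(closed_form(t), numeric(t))   # the two columns agree

print(closed_form(1e6) / 1e6)           # ~ pi/2
```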

The following result is essentially a consequence of Proposition 3.7.1 (i).

**Corollary 3.7.2.** Let S, A0, and M be as in Proposition 3.7.1. Then dom S is dense in dom A0 if and only if

$$\lim\_{y \to +\infty} y \operatorname{Im} \left( M(iy)\varphi, \varphi \right) = \infty \quad \text{for all} \quad \varphi \in \mathfrak{G}, \varphi \neq 0.$$

Proof. Let λ ∈ ρ(A0) and note that f ∈ (dom S)⊥ if and only if for all {h, h'} ∈ S

$$0 = (f, h) = \left(f, (A\_0 - \overline{\lambda})^{-1} (h' - \overline{\lambda}h)\right) = \left((A\_0 - \lambda)^{-1} f, h' - \overline{\lambda}h\right).$$

Hence, f ∈ (dom S)⊥ if and only if (A0 − λ)^{-1}f ∈ ker (S∗ − λ) = ran γ(λ). Furthermore, (A0 − λ)^{-1}f = 0 if and only if f ∈ mul A0 = (dom A0)⊥.

Now assume that dom S is not dense in dom A0. Then there exists a nontrivial f ∈ dom A0 such that f ∈ (dom S)⊥, and hence

$$(A\_0 - \lambda)^{-1} f \in \ker(S^\* - \lambda).$$

Since f ∈ dom A0, it follows that (A0 − λ)^{-1}f = γ(λ)ϕ for a nontrivial ϕ ∈ G. This means γ(λ)ϕ ∈ dom A0, and hence

$$\lim\_{y \to +\infty} y \operatorname{Im} \left( M(iy)\varphi, \varphi \right) < \infty \tag{3.7.7}$$

by Proposition 3.7.1 (i). Conversely, if (3.7.7) holds for some nontrivial ϕ ∈ G, then by Proposition 3.7.1 (i) it follows that γ(λ)ϕ ∈ dom A0. Hence, there exists a nontrivial f ∈ dom A0 such that γ(λ)ϕ = (A0 − λ)^{-1}f. Therefore, one sees that f ∈ (dom S)⊥ and hence dom S is not dense in dom A0. □
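The density test of Corollary 3.7.2 can again be probed with scalar examples (hypothetical illustrations, not from the text): for m(λ) = i√λ the quantity y·Im m(iy) grows like y^{3/2}, while for M(λ) = −1/λ it stays constant, signaling a defect element γ(λ)ϕ ∈ dom A0.

```python
import cmath

# Sketch: the growth test of Corollary 3.7.2 for two hypothetical scalar
# Weyl functions; dom S is dense in dom A0 iff y*Im M(iy) -> infinity.

def y_im_M(M, y):
    return y * M(1j * y).imag

M_dense = lambda l: 1j * cmath.sqrt(l)   # y*Im M(iy) = y^(3/2)/sqrt(2) -> oo
M_defect = lambda l: -1.0 / l            # y*Im M(iy) = 1 for every y > 0

for y in (1e2, 1e4, 1e6):
    print(y_im_M(M_dense, y), y_im_M(M_defect, y))
```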

**Corollary 3.7.3.** Let S, A0, and M be as in Proposition 3.7.1. Then S is a densely defined operator if and only if the following conditions hold:


In this case, S<sup>∗</sup> is an operator and all intermediate extensions of S are operators.

Proof. Note that Proposition 3.5.7 and the fact that γ(λ0)*(I − Pop)γ(λ0) in (3.5.13) is a nonnegative operator in G show that condition (i) is equivalent to the condition

$$\lim\_{y \to +\infty} \frac{1}{iy} M(iy)\varphi = 0, \qquad \varphi \in \mathcal{G}.$$

By (3.5.18), this condition is necessary and sufficient for A0 to be an operator, which is the case if and only if dom A0 is dense in H. Moreover, according to Corollary 3.7.2, the condition (ii) is necessary and sufficient for dom S to be dense in dom A0. Therefore, dom S is dense in H if and only if conditions (i) and (ii) hold. □

In the next result, which is parallel to Proposition 3.7.1, the limit properties of the Weyl function at x ∈ R will be connected with elements in

$$\ker\left(S^\*-\lambda\right) \cap \operatorname{ran}\left(A\_0-x\right) \quad \text{and} \quad \ker\left(S^\*-\lambda\right) \cap \operatorname{ran}\left|A\_0-x\right|^\frac{1}{2}.$$

For this reason the representation (3.5.1) expressing the Weyl function M in terms of the self-adjoint relation $A\_0 = \ker \Gamma\_0$ will be used. For simplicity one takes $\lambda\_0 \in \mathbb{C} \setminus \mathbb{R}$ such that $\operatorname{Re} \lambda\_0 = x$ in (3.5.1), which leads to

$$\begin{split} M(x+iy) &= \operatorname{Re} M(\lambda\_0) \\ &\quad + \gamma(\lambda\_0)^\* \left[ iy + \left(|\operatorname{Im} \lambda\_0|^2 - y^2\right) \left(A\_0 - (x+iy)\right)^{-1} \right] \gamma(\lambda\_0). \end{split} \tag{3.7.8}$$

It follows by means of the spectral calculus applied to (3.7.8) that for $x \in \mathbb{R}$ and $\varphi \in \mathcal{G}$ one has

$$\begin{split} \frac{\text{Im}\,(M(x+iy)\varphi,\varphi)}{y} &= \|\gamma(\lambda\_0)\varphi\|^2 \\ &+ \left(|\text{Im}\,\lambda\_0|^2 - y^2\right) \int\_{\mathbb{R}} \frac{1}{(t-x)^2 + y^2} \, d(E(t)\gamma(\lambda\_0)\varphi, \gamma(\lambda\_0)\varphi). \end{split} \tag{3.7.9}$$

**Proposition 3.7.4.** Let S be a closed symmetric relation in $\mathfrak{H}$, let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$, let $A\_0 = \ker \Gamma\_0$ be decomposed as in (3.7.1), and let M and $\gamma$ be the corresponding Weyl function and $\gamma$-field. Then the following statements hold for $x \in \mathbb{R}$ and $\varphi \in \mathcal{G}$:

(i) $\gamma(\lambda)\varphi \in \operatorname{ran} (A\_0 - x)$ for some, and hence for all, $\lambda \in \rho(A\_0)$, if and only if

$$\lim\_{y \downarrow 0} \frac{\text{Im}\left(M(x+iy)\varphi,\varphi\right)}{y} < \infty;\tag{3.7.10}$$

(ii) $P\_{\rm op}\gamma(\lambda)\varphi \in \operatorname{ran} |A\_{0,\rm op} - x|^{\frac{1}{2}}$ for some, and hence for all, $\lambda \in \rho(A\_0)$, if and only if

$$\int\_{0}^{1} \frac{\operatorname{Im} \left( M(x+iy)\varphi, \varphi \right)}{y} dy < \infty. \tag{3.7.11}$$

Proof. (i) It will first be shown that for $\lambda, \lambda\_0 \in \rho(A\_0)$ one has $\gamma(\lambda)\varphi \in \operatorname{ran} (A\_0 - x)$ if and only if $\gamma(\lambda\_0)\varphi \in \operatorname{ran} (A\_0 - x)$. Assume that $\gamma(\lambda\_0)\varphi \in \operatorname{ran} (A\_0 - x)$. Then there is $\{f, f'\} \in A\_0$ such that $\gamma(\lambda\_0)\varphi = f' - xf$. As

$$\left\{ f' - xf, (A\_0 - \lambda)^{-1} (f' - xf) \right\} \in (A\_0 - \lambda)^{-1},$$

it follows that

$$\left\{ (A\_0 - \lambda)^{-1} (f' - xf), f' - xf + (\lambda - x)(A\_0 - \lambda)^{-1} (f' - xf) \right\} \in A\_0 - x.$$
 
$$\text{Hence, } f' - xf + (\lambda - x)(A\_0 - \lambda)^{-1} (f' - xf) \in \text{ran} \,(A\_0 - x) \text{ and }$$

$$(A\_0 - \lambda)^{-1}(f' - xf) \in \text{ran}\,(A\_0 - x).$$

From the identity $\gamma(\lambda) = \big(I + (\lambda - \lambda\_0)(A\_0 - \lambda)^{-1}\big)\gamma(\lambda\_0)$, established in Proposition 2.3.2 (ii), one finds that

$$
\gamma(\lambda)\varphi = f' - xf + (\lambda - \lambda\_0)(A\_0 - \lambda)^{-1}(f' - xf) \in \text{ran}\,(A\_0 - x).
$$

Thus, $\gamma(\lambda\_0)\varphi \in \operatorname{ran} (A\_0 - x)$ implies that $\gamma(\lambda)\varphi \in \operatorname{ran} (A\_0 - x)$. Since $\lambda\_0$ and $\lambda$ in the above argument can be interchanged, it is clear that $\gamma(\lambda)\varphi \in \operatorname{ran} (A\_0 - x)$ if and only if $\gamma(\lambda\_0)\varphi \in \operatorname{ran} (A\_0 - x)$.

To verify the remaining assertion in (i) with λ = λ0, note first that the limit as y ↓ 0 in (3.7.10) is finite if and only if the limit of the integral in the second term in (3.7.9) is finite. An application of the monotone convergence theorem shows that the limit as y ↓ 0 of the integral in the second term in (3.7.9) is finite if and only if

$$\int\_{\mathbb{R}} \frac{1}{(t-x)^2} \, d(E(t)\gamma(\lambda\_0)\varphi, \gamma(\lambda\_0)\varphi) < \infty,$$

that is, if and only if

$$\int\_{\mathbb{R}} \frac{1}{(t-x)^2} \, d(E\_{\text{op}}(t)P\_{\text{op}}\gamma(\lambda\_0)\varphi, P\_{\text{op}}\gamma(\lambda\_0)\varphi) < \infty,$$

where the definition of the spectral measure $E(\cdot)$ of $A\_0$ via the spectral measure $E\_{\rm op}(\cdot)$ of $A\_{0,\rm op}$ was used. Therefore, the limit as $y \downarrow 0$ in (3.7.10) is finite if and only if $P\_{\rm op}\gamma(\lambda\_0)\varphi \in \operatorname{dom} (A\_{0,\rm op} - x)^{-1} = \operatorname{ran} (A\_{0,\rm op} - x)$, that is, if and only if $\gamma(\lambda\_0)\varphi \in \operatorname{ran} (A\_0 - x)$.

(ii) As in (i), it will first be shown that $P\_{\rm op}\gamma(\lambda)\varphi \in \operatorname{ran} |A\_{0,\rm op} - x|^{\frac{1}{2}}$ if and only if $P\_{\rm op}\gamma(\lambda\_0)\varphi \in \operatorname{ran} |A\_{0,\rm op} - x|^{\frac{1}{2}}$ for $\lambda, \lambda\_0 \in \rho(A\_0)$. Assume that

$$P\_{\rm op}\gamma(\lambda\_0)\varphi = |A\_{0,\rm op} - x|^{\frac{1}{2}}f$$

for some $f \in \operatorname{dom} |A\_{0,\rm op} - x|^{\frac{1}{2}}$. It follows from the functional calculus for unbounded self-adjoint operators that

$$\begin{aligned} \overline{(A\_{0,\rm op} - \lambda)^{-1} |A\_{0,\rm op} - x|^{\frac{1}{2}}} &= \overline{|A\_{0,\rm op} - x|^{\frac{1}{2}} (A\_{0,\rm op} - \lambda)^{-1}} \\ &= |A\_{0,\rm op} - x|^{\frac{1}{2}} (A\_{0,\rm op} - \lambda)^{-1} \end{aligned}$$

and hence, since $\gamma(\lambda) = \big(I + (\lambda - \lambda\_0)(A\_0 - \lambda)^{-1}\big)\gamma(\lambda\_0)$, one has that

$$\begin{split} P\_{\rm op}\gamma(\lambda)\varphi &= P\_{\rm op}\gamma(\lambda\_{0})\varphi + (\lambda - \lambda\_{0})(A\_{0,\rm op} - \lambda)^{-1}P\_{\rm op}\gamma(\lambda\_{0})\varphi \\ &= |A\_{0,\rm op} - x|^{\frac{1}{2}}f + (\lambda - \lambda\_{0})(A\_{0,\rm op} - \lambda)^{-1}|A\_{0,\rm op} - x|^{\frac{1}{2}}f \\ &= |A\_{0,\rm op} - x|^{\frac{1}{2}}f + (\lambda - \lambda\_{0})|A\_{0,\rm op} - x|^{\frac{1}{2}}(A\_{0,\rm op} - \lambda)^{-1}f, \end{split}$$

that is, $P\_{\rm op}\gamma(\lambda)\varphi \in \operatorname{ran} |A\_{0,\rm op} - x|^{\frac{1}{2}}$. Thus, $P\_{\rm op}\gamma(\lambda\_0)\varphi \in \operatorname{ran} |A\_{0,\rm op} - x|^{\frac{1}{2}}$ implies $P\_{\rm op}\gamma(\lambda)\varphi \in \operatorname{ran} |A\_{0,\rm op} - x|^{\frac{1}{2}}$. Since $\lambda\_0$ and $\lambda$ in the above argument can be interchanged, it is clear that $P\_{\rm op}\gamma(\lambda\_0)\varphi \in \operatorname{ran} |A\_{0,\rm op} - x|^{\frac{1}{2}}$ holds if and only if $P\_{\rm op}\gamma(\lambda)\varphi \in \operatorname{ran} |A\_{0,\rm op} - x|^{\frac{1}{2}}$ holds.

To verify the remaining assertion in (ii), it is convenient to fix $\lambda = \lambda\_0 \in \mathbb{C} \setminus \mathbb{R}$ such that $|\operatorname{Im} \lambda\_0| > 1$. One then concludes from (3.7.9) that the integral in (3.7.11) converges if and only if the integral

$$\int\_0^1 \left( \int\_{\mathbb{R}} \frac{1}{(t-x)^2 + y^2} \, d(E(t)\gamma(\lambda\_0)\varphi, \gamma(\lambda\_0)\varphi) \right) dy$$

converges. Changing the order of integration in the last integral and observing that

$$\int\_0^1 \frac{1}{(t-x)^2 + y^2} \, dy = \frac{1}{|t-x|} \arctan \frac{1}{|t-x|}, \qquad t \neq x,$$

one sees that the integral in (3.7.11) converges if and only if

$$\int\_{\mathbb{R}} \frac{1}{|t - x|} \arctan \frac{1}{|t - x|} \, d(E(t)\gamma(\lambda\_0)\varphi, \gamma(\lambda\_0)\varphi) < \infty. \tag{3.7.12}$$

Since the integrand in (3.7.12) is bounded on $\mathbb{R} \setminus (x-1, x+1)$, it follows that the integral in (3.7.11) converges if and only if

$$\int\_{x-1}^{x+1} \frac{1}{|t-x|} \, d(E(t)\gamma(\lambda\_0)\varphi, \gamma(\lambda\_0)\varphi) < \infty,$$

which is equivalent to

$$\int\_{\mathbb{R}} \frac{1}{|t - x|} \, d(E(t)\gamma(\lambda\_0)\varphi, \gamma(\lambda\_0)\varphi) < \infty$$

and to

$$\int\_{\mathbb{R}} \frac{1}{|t - x|} \, d(E\_{\text{op}}\,(t)P\_{\text{op}}\,\gamma(\lambda\_0)\varphi, P\_{\text{op}}\,\gamma(\lambda\_0)\varphi) < \infty.$$

Therefore, (3.7.11) holds if and only if

$$P\_{\rm op} \gamma(\lambda\_0)\varphi \in \text{dom}\, |A\_{0,\rm op} - x|^{-\frac{1}{2}} = \text{ran}\, |A\_{0,\rm op} - x|^{\frac{1}{2}},$$

that is, if and only if $\gamma(\lambda\_0)\varphi \in \operatorname{ran} |A\_0 - x|^{\frac{1}{2}}$. $\square$
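To see the two conditions of Proposition 3.7.4 at work in a simple situation, consider the scalar Nevanlinna function $M(\lambda) = -1/\lambda$, whose integral representation involves only the Dirac measure at $0$, so that the associated relation $A\_0$ has spectrum $\{0\}$ (cf. the $L^2$-models for scalar Nevanlinna functions in Section 4.3). For $x = 1$ one computes

$$\operatorname{Im} M(1+iy) = \frac{y}{1+y^2}, \qquad \lim\_{y \downarrow 0} \frac{\operatorname{Im} M(1+iy)}{y} = 1, \qquad \int\_0^1 \frac{\operatorname{Im} M(1+iy)}{y}\, dy = \frac{\pi}{4},$$

so that both (3.7.10) and (3.7.11) hold at $x = 1$. At the spectral point $x = 0$, however, $\operatorname{Im} M(iy) = 1/y$, the quotient in (3.7.10) equals $1/y^2$, and the limit is infinite, so neither condition is satisfied.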

## **3.8 Spectra and local minimality for self-adjoint extensions**

In this section the results on eigenvalues, eigenspaces, continuous, absolutely continuous and singular continuous spectra from Section 3.5 and Section 3.6 will be explicitly formulated for arbitrary self-adjoint extensions of a symmetric relation.

Let S be a closed symmetric relation in $\mathfrak{H}$ and let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$ with $\gamma$-field $\gamma$ and Weyl function M. Consider a self-adjoint extension

$$A\_{\Theta} = \left\{ \widehat{f} \in S^\* : \Gamma \widehat{f} \in \Theta \right\} = \ker \left( \Gamma\_1 - \Theta \Gamma\_0 \right) \tag{3.8.1}$$

of S in $\mathfrak{H}$, where $\Theta = \Theta^\*$ is a self-adjoint relation in $\mathcal{G}$. Recall from Corollary 1.10.9 that there exist operators $\mathcal{A}, \mathcal{B} \in \mathbf{B}(\mathcal{G})$ with the properties

$$\mathcal{A}^\* \mathcal{B} = \mathcal{B}^\* \mathcal{A}, \quad \mathcal{A} \mathcal{B}^\* = \mathcal{B} \mathcal{A}^\*, \quad \mathcal{A}^\* \mathcal{A} + \mathcal{B}^\* \mathcal{B} = I = \mathcal{A} \mathcal{A}^\* + \mathcal{B} \mathcal{B}^\*,$$

such that

$$\Theta = \left\{ \{ \mathcal{A}\varphi, \mathcal{B}\varphi \} : \varphi \in \mathcal{G} \right\} = \left\{ \{ \psi, \psi' \} \in \mathcal{G}^2 : \mathcal{A}^\* \psi' = \mathcal{B}^\* \psi \right\}.$$
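For instance, in the simplest case $\mathcal{G} = \mathbb{C}$ a self-adjoint relation $\Theta$ is either the graph of a real number $\theta$ or the purely multivalued relation $\{0\} \times \mathbb{C}$, and admissible parameters are written down directly:

$$\Theta = \left\{ \{\varphi, \theta\varphi\} : \varphi \in \mathbb{C} \right\}: \quad \mathcal{A} = \frac{1}{\sqrt{1+\theta^2}}, \quad \mathcal{B} = \frac{\theta}{\sqrt{1+\theta^2}}; \qquad \Theta = \{0\} \times \mathbb{C}: \quad \mathcal{A} = 0, \quad \mathcal{B} = 1.$$

In both cases the identities $\mathcal{A}^\*\mathcal{B} = \mathcal{B}^\*\mathcal{A}$ and $\mathcal{A}^\*\mathcal{A} + \mathcal{B}^\*\mathcal{B} = 1 = \mathcal{A}\mathcal{A}^\* + \mathcal{B}\mathcal{B}^\*$ are immediate.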

According to Section 2.2, the self-adjoint extension $A\_\Theta$ in (3.8.1) can also be written in the form

$$A\_{\Theta} = \{ \widehat{f} \in S^\* \, : \, \mathcal{A}^\* \Gamma\_1 \widehat{f} = \mathcal{B}^\* \Gamma\_0 \widehat{f} \}.$$

In order to describe the spectrum of $A\_\Theta$ consider the boundary triplet $\{\mathcal{G}, \Gamma'\_0, \Gamma'\_1\}$, where

$$
\begin{pmatrix} \Gamma'\_0 \\ \Gamma'\_1 \end{pmatrix} = \begin{pmatrix} \mathcal{B}^\* & -\mathcal{A}^\* \\ \mathcal{A}^\* & \mathcal{B}^\* \end{pmatrix} \begin{pmatrix} \Gamma\_0 \\ \Gamma\_1 \end{pmatrix}; \tag{3.8.2}
$$

cf. Corollary 2.5.11. Then one has

$$A\_{\Theta} = \ker \Gamma\_0',\tag{3.8.3}$$

and the corresponding Weyl function and $\gamma$-field will be denoted by $M\_\Theta$ and $\gamma\_\Theta$. For $\lambda \in \rho(A\_\Theta) \cap \rho(A\_0)$ they are given by

$$M\_{\Theta}(\lambda) = \left(\mathcal{A}^\* + \mathcal{B}^\* M(\lambda)\right) \left(\mathcal{B}^\* - \mathcal{A}^\* M(\lambda)\right)^{-1} \tag{3.8.4}$$

and

$$
\gamma\_\Theta(\lambda) = \gamma(\lambda) \left( \mathcal{B}^\* - \mathcal{A}^\* M(\lambda) \right)^{-1},
$$

respectively; cf. (2.5.17) and (2.5.18). From (3.8.3) it is clear that the spectrum of A<sup>Θ</sup> can be described by means of the Weyl function MΘ. Therefore, the earlier results expressing the spectrum of A<sup>0</sup> in terms of the Weyl function M (and the γ-field γ) can now be simply translated to the present context. The main results will be listed below; it is left to the reader to formulate analogs of the results in Section 3.7 in the present setting.

First the analogs of Theorem 3.5.5 and Theorem 3.5.10 will be described. For this purpose define the operators $\mathcal{R}\_x^\Theta$, $x \in \mathbb{R}$, and $\mathcal{R}\_\infty^\Theta$ in analogy with Definition 3.5.2 and Definition 3.5.8:

$$\mathcal{R}\_x^{\Theta} \varphi = \lim\_{y \downarrow 0} iy M\_{\Theta}(x+iy) \varphi, \qquad \varphi \in \mathcal{G},$$

and

$$\mathcal{R}\_{\infty}^{\Theta}\varphi = \lim\_{y \to +\infty} \frac{1}{iy} M\_{\Theta}(iy)\varphi, \qquad \varphi \in \mathcal{G}.$$
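As an illustration, suppose that $\mathcal{G} = \mathbb{C}$ and that the transformed Weyl function is $M\_\Theta(\lambda) = -1/\lambda$. Then

$$\mathcal{R}\_x^{\Theta} = \lim\_{y \downarrow 0} \frac{-iy}{x+iy} = \begin{cases} -1, & x = 0, \\ 0, & x \neq 0, \end{cases} \qquad \mathcal{R}\_{\infty}^{\Theta} = \lim\_{y \to +\infty} \frac{-1}{(iy)^2} = 0.$$

By Corollary 3.8.1 below, $x = 0$ is then an eigenvalue of $A\_\Theta$ whenever $0 \notin \sigma\_{\rm p}(S)$, while by Corollary 3.8.2 the extension $A\_\Theta$ is an operator whenever $\operatorname{mul} S = \{0\}$.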

As in Section 3.5, one has that $\mathcal{R}\_x^\Theta, \mathcal{R}\_\infty^\Theta \in \mathbf{B}(\mathcal{G})$. In terms of the boundary triplet $\{\mathcal{G}, \Gamma'\_0, \Gamma'\_1\}$ in (3.8.2) and the corresponding Weyl function $M\_\Theta$ in (3.8.4), Theorem 3.5.5 and Corollary 3.5.6 read as follows.

**Corollary 3.8.1.** Let S, $A\_\Theta$, and $M\_\Theta$ be as above and let $x \in \mathbb{R}$. Then the mapping

$$\tau: \widehat{\mathfrak{N}}\_x(A\_{\Theta}) \ominus \widehat{\mathfrak{N}}\_x(S) \to \overline{\operatorname{ran}} \mathcal{R}\_x^{\Theta}, \quad \widehat{f} \mapsto \mathcal{A}^\* \Gamma\_0 \widehat{f} + \mathcal{B}^\* \Gamma\_1 \widehat{f},$$

is an isomorphism. In particular,

$$x \in \sigma\_{\rm p}(A\_{\Theta}) \text{ and } \widehat{\mathfrak{N}}\_{x}(A\_{\Theta}) \ominus \widehat{\mathfrak{N}}\_{x}(S) \neq \{0\} \quad \Leftrightarrow \quad \mathcal{R}\_{x}^{\Theta} \neq 0,$$

and if $x \notin \sigma\_{\rm p}(S)$, then $x \in \sigma\_{\rm p}(A\_\Theta)$ if and only if $\mathcal{R}\_x^\Theta \neq 0$.

Similarly, Theorem 3.5.10 and Corollary 3.5.11 take the following form.

**Corollary 3.8.2.** Let S, $A\_\Theta$, and $M\_\Theta$ be as above. Then the mapping

$$\tau: \widehat{\mathfrak{N}}\_{\infty}(A\_{\Theta}) \ominus \widehat{\mathfrak{N}}\_{\infty}(S) \to \overline{\operatorname{ran}} \mathcal{R}^{\Theta}\_{\infty}, \quad \widehat{f} \mapsto \mathcal{A}^\* \Gamma\_0 \widehat{f} + \mathcal{B}^\* \Gamma\_1 \widehat{f},$$

is an isomorphism. In particular,

$$\operatorname{mul} A\_{\Theta} \ominus \operatorname{mul} S \neq \{0\} \quad \Leftrightarrow \quad \mathcal{R}\_{\infty}^{\Theta} \neq 0,$$

and if $\operatorname{mul} S = \{0\}$, then $A\_\Theta$ is an operator if and only if $\mathcal{R}\_\infty^\Theta = 0$.

For the next results the local simplicity condition appearing in many of the results in Section 3.6 has to be reformulated with respect to $A\_\Theta$. According to Definition 3.4.9, the closed symmetric relation S is simple with respect to $\Delta \subset \mathbb{R}$ and the self-adjoint extension $A\_\Theta$ if

$$E\_{\Theta}(\Delta)\mathfrak{H} = \overline{\operatorname{span}}\left\{ E\_{\Theta}(\Delta)\gamma\_{\Theta}(\nu)\varphi : \nu \in \mathbb{C} \setminus \mathbb{R}, \ \varphi \in \mathcal{G} \right\},\tag{3.8.5}$$

where EΘ(·) is the spectral measure of AΘ.

Then Theorem 3.6.1 yields the following statement.

**Corollary 3.8.3.** Let S, $A\_\Theta$, and $M\_\Theta$ be as above, let $\Delta \subset \mathbb{R}$ be an open interval, and assume that the local simplicity condition (3.8.5) is satisfied. Then the following statements hold for each $x \in \Delta$:


If S is simple, then the statements (i)–(iv) hold for all $x \in \mathbb{R}$.

Finally, the corresponding results for the absolutely continuous, singular, and singular continuous spectra will be formulated; it is left to the reader to state the analogs of Corollaries 3.6.6, 3.6.9, and 3.6.10.

In the present situation Theorem 3.6.5 reads as follows.

**Corollary 3.8.4.** Let S, $A\_\Theta$, and $M\_\Theta$ be as above, let $\Delta \subset \mathbb{R}$ be an open interval, and assume that the local simplicity condition (3.8.5) is satisfied. Then the absolutely continuous spectrum of $A\_\Theta$ in $\Delta$ is given by

$$\overline{\sigma\_{\rm ac}(A\_{\Theta}) \cap \Delta} = \bigcup\_{\varphi \in \mathfrak{G}} \text{clos}\_{\text{ac}} \left( \{ x \in \Delta : 0 < \text{Im} \left( M\_{\Theta}(x + i0)\varphi, \varphi \right) < \infty \} \right). \tag{3.8.6}$$

If S is simple, then (3.8.6) holds for every open interval Δ, including Δ = R.

For the singular and singular continuous spectra one obtains the following version of Theorem 3.6.8.

**Corollary 3.8.5.** Let S, $A\_\Theta$, and $M\_\Theta$ be as above, let $\Delta \subset \mathbb{R}$ be an open interval, and assume that the local simplicity condition (3.8.5) is satisfied. Then the following statements hold:

(i) The singular spectrum of $A\_\Theta$ in $\Delta$ satisfies

$$\left(\sigma\_s(A\_{\Theta}) \cap \Delta\right) \subset \overline{\bigcup\_{\varphi \in \mathfrak{G}} \left\{ x \in \Delta : \operatorname{Im} \left( M\_{\Theta}(x + i0)\varphi, \varphi \right) = \infty \right\}}.$$

(ii) The singular continuous spectrum of $A\_\Theta$ in $\Delta$, $\sigma\_{\rm sc}(A\_\Theta) \cap \Delta$, is contained in the set

$$\bigcup\_{\varphi \in \mathcal{G}} \operatorname{clos}\_{\mathbb{C}} \left( \{ x \in \Delta : \operatorname{Im} \left( M\_{\Theta} (x + i0) \varphi, \varphi \right) = \infty, \lim\_{y \downarrow 0} y (M\_{\Theta} (x + iy) \varphi, \varphi) = 0 \} \right).$$

If S is simple, then (i) and (ii) hold for every open interval Δ, including Δ = R.

Finally, the special case where the self-adjoint relation $\Theta$ in (3.8.1) is a bounded self-adjoint operator will be briefly discussed. In this situation there is a more natural choice of the transformed boundary triplet $\{\mathcal{G}, \Gamma'\_0, \Gamma'\_1\}$ above. In fact, if S is a closed symmetric relation, $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ is a boundary triplet for $S^\*$ with $\gamma$-field $\gamma$ and Weyl function M, and $\Theta \in \mathbf{B}(\mathcal{G})$ is self-adjoint, then, by Corollary 2.5.7, the mappings

$$
\Gamma\_0' = \Gamma\_1 - \Theta \Gamma\_0 \quad \text{and} \quad \Gamma\_1' = -\Gamma\_0
$$

lead to a boundary triplet $\{\mathcal{G}, \Gamma'\_0, \Gamma'\_1\}$ for $S^\*$ such that

$$\ker \Gamma\_0' = \ker \left( \Gamma\_1 - \Theta \Gamma\_0 \right) = A\_{\Theta}.$$

For $\lambda \in \rho(A\_0) \cap \rho(A\_\Theta)$ the corresponding $\gamma$-field $\gamma\_\Theta$ and Weyl function $M\_\Theta$ are given by

$$\gamma\_{\Theta}(\lambda) = -\gamma(\lambda) \left(\Theta - M(\lambda)\right)^{-1} \quad \text{and} \quad M\_{\Theta}(\lambda) = \left(\Theta - M(\lambda)\right)^{-1},\tag{3.8.7}$$
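A scalar example may clarify how the eigenvalues of $A\_\Theta$ move with the parameter. Take $\mathcal{G} = \mathbb{C}$, let $\Theta = \theta \in \mathbb{R} \setminus \{0\}$, and suppose that $M(\lambda) = -1/\lambda$. Then (3.8.7) gives

$$M\_\Theta(\lambda) = \left(\theta + \frac{1}{\lambda}\right)^{-1} = \frac{\lambda}{\theta\lambda + 1},$$

which has a pole at $\lambda = -1/\theta$, and

$$\lim\_{y \downarrow 0} iy\, M\_\Theta\left(-\tfrac{1}{\theta} + iy\right) = -\frac{1}{\theta^2} \neq 0,$$

so that $-1/\theta \in \sigma\_{\rm p}(A\_\Theta)$ whenever $-1/\theta \notin \sigma\_{\rm p}(S)$; as $\theta$ runs through $\mathbb{R} \setminus \{0\}$ this eigenvalue sweeps through $\mathbb{R} \setminus \{0\}$.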

respectively. Then the above results in Corollaries 3.8.1–3.8.5 remain valid with the function $M\_\Theta$ in (3.8.7) and the mapping $\widehat{f} \mapsto \mathcal{A}^\*\Gamma\_0\widehat{f} + \mathcal{B}^\*\Gamma\_1\widehat{f}$ in Corollary 3.8.1 and Corollary 3.8.2 replaced by $\widehat{f} \mapsto -\Gamma\_0\widehat{f}$.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 4**

## **Operator Models for Nevanlinna Functions**

The classes of Weyl functions and, more generally, of Nevanlinna functions will be studied from the point of view of reproducing kernel Hilbert spaces. It is clear from Chapter 2 that every Weyl function is a uniformly strict Nevanlinna function, and it is one of the main objectives here to show that the converse is also true: every uniformly strict Nevanlinna function is a Weyl function. The model space is built as a reproducing kernel Hilbert space of holomorphic functions. A brief introduction to reproducing kernel Hilbert spaces is given in Section 4.1. Using the Nevanlinna kernel, in Section 4.2 multiplication operators by the independent variable are studied and a boundary triplet whose Weyl function is the original Nevanlinna function is constructed. For scalar Nevanlinna functions an alternative model in an $L^2$-space is given in Section 4.3. The uniqueness of these constructions will also be discussed in detail. An extension of the operator model in Section 4.2 to Nevanlinna functions which are not necessarily uniformly strict, and to Nevanlinna families, is provided in Section 4.4. This also includes a discussion of generalized resolvents, and as a byproduct one obtains the Sz.-Nagy dilation theorem. The connection with extension theory is given via the compressed resolvents of self-adjoint relations in the Kreĭn–Naĭmark formula in Section 4.5. It will be shown that for every Nevanlinna family there is a self-adjoint exit space extension whose compressed resolvent is parametrized by the Nevanlinna family. Closely connected is the discussion of the orthogonal coupling of two boundary triplets in Section 4.6, which also complements the considerations in Section 2.7.

## **4.1 Reproducing kernel Hilbert spaces**

The following discussion of reproducing kernel Hilbert spaces is focused on what is needed in this text. Within these bounds there is a complete treatment for the reader's convenience. In the first definition one restricts attention to open sets, as the emphasis will be on reproducing kernel Hilbert spaces of holomorphic functions.

**Definition 4.1.1.** Let $\Omega \subset \mathbb{C}$ be an open set and let $\mathcal{G}$ be a Hilbert space. A mapping

$$\mathsf{K}(\cdot,\cdot): \Omega \times \Omega \to \mathbf{B}(\mathcal{G}) \tag{4.1.1}$$

is called a **B**(G)-valued kernel on Ω. The kernel K(·, ·), the kernel K for short, is said to be

(i) nonnegative, if for any finite set of points $\lambda\_1, \dots, \lambda\_n \in \Omega$ and any choice of vectors $\varphi\_1, \dots, \varphi\_n \in \mathcal{G}$ the $n \times n$ matrix

$$\left( (\mathsf{K}(\lambda\_i, \lambda\_j)\varphi\_j, \varphi\_i)\_{\mathcal{G}} \right)\_{i,j=1}^n$$

is nonnegative;


(ii) symmetric, if $\mathsf{K}(\lambda, \mu)^\* = \mathsf{K}(\mu, \lambda)$ for all $\lambda, \mu \in \Omega$;

(iii) holomorphic, if for every $\mu \in \Omega$ and $\varphi \in \mathcal{G}$ the function $\lambda \mapsto \mathsf{K}(\lambda, \mu)\varphi$ is holomorphic on $\Omega$.

Note that the first two items in this definition are not independent: nonnegativity is the stronger condition.
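A standard example of a nonnegative kernel is obtained for $\mathcal{G} = \mathbb{C}$ and $\Omega = \mathbb{D}$, the open unit disc, by setting

$$\mathsf{K}(\lambda, \mu) = \frac{1}{1 - \lambda\overline{\mu}}, \qquad \lambda, \mu \in \mathbb{D}.$$

Indeed, expanding into a geometric series one finds

$$\sum\_{i,j=1}^{n} \mathsf{K}(\lambda\_i, \lambda\_j)\varphi\_j\overline{\varphi\_i} = \sum\_{k=0}^{\infty} \Big| \sum\_{j=1}^{n} \overline{\lambda\_j}^{\,k} \varphi\_j \Big|^2 \geq 0,$$

so the kernel is nonnegative; it is the reproducing kernel of the Hardy space $H^2(\mathbb{D})$.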

**Lemma 4.1.2.** Let K(·, ·) be a **B**(G)-valued kernel on Ω as in (4.1.1). If K(·, ·) is nonnegative, then K(·, ·) is symmetric.

Proof. Let ϕ, ψ ∈ G and λ, μ ∈ Ω. According to (i), the 2 × 2 matrix

$$
\begin{pmatrix}
(\mathsf{K}(\lambda,\lambda)\varphi,\varphi)\_{\mathfrak{G}} & (\mathsf{K}(\lambda,\mu)\psi,\varphi)\_{\mathfrak{G}} \\
(\mathsf{K}(\mu,\lambda)\varphi,\psi)\_{\mathfrak{G}} & (\mathsf{K}(\mu,\mu)\psi,\psi)\_{\mathfrak{G}}
\end{pmatrix}
$$

is nonnegative and hence Hermitian. In particular, this implies that

$$(\mathsf{K}(\lambda,\mu)\psi,\varphi)\_{\mathfrak{G}} = \overline{(\mathsf{K}(\mu,\lambda)\varphi,\psi)\_{\mathfrak{G}}} = (\psi,\mathsf{K}(\mu,\lambda)\varphi)\_{\mathfrak{G}}.$$

for all $\varphi, \psi \in \mathcal{G}$. This gives $\mathsf{K}(\lambda, \mu)^\* = \mathsf{K}(\mu, \lambda)$, so that the kernel $\mathsf{K}(\cdot, \cdot)$ is symmetric. $\square$

The kernels described in Definition 4.1.1 form the basis of the theory of reproducing kernel Hilbert spaces. They arise naturally in the following context. Let $\Omega \subset \mathbb{C}$ be an open set and let $(\mathfrak{H}, \langle \cdot, \cdot \rangle)$ be a Hilbert space of functions defined on $\Omega$ with values in a Hilbert space $\mathcal{G}$. The Hilbert space $\mathfrak{H}$ is called a reproducing kernel Hilbert space if for all $\mu \in \Omega$ the operation of point evaluation

$$f \in \mathfrak{H} \mapsto f(\mu) \in \mathfrak{G}$$

is bounded. In other words, for each μ ∈ Ω the linear operator E(μ) : H → G, defined by E(μ)f = f(μ), belongs to **B**(H, G).

In the next theorem a kernel is related to a Hilbert space of functions in which point evaluation is bounded.

**Theorem 4.1.3.** Let $\mathcal{G}$ be a Hilbert space and assume that $(\mathfrak{H}, \langle \cdot, \cdot \rangle)$ is a Hilbert space of $\mathcal{G}$-valued functions on an open set $\Omega \subset \mathbb{C}$ such that point evaluation is bounded for all $\mu \in \Omega$. Define the corresponding kernel $\mathsf{K}(\cdot, \cdot)$ by

$$\mathsf{K}(\lambda, \mu) = E(\lambda)E(\mu)^\* \in \mathbf{B}(\mathcal{G}), \quad \lambda, \mu \in \Omega.$$

Then the following statements hold:

(i) For f ∈ H one has the reproducing kernel property

$$\langle f, \mathsf{K}(\cdot, \mu)\varphi \rangle = (f(\mu), \varphi)\_{\mathfrak{G}}, \quad \varphi \in \mathfrak{G}, \quad \mu \in \Omega. \tag{4.1.2}$$

(ii) The identity

$$\langle \mathsf{K}(\cdot,\nu)\eta,\mathsf{K}(\cdot,\mu)\varphi \rangle = (\mathsf{K}(\mu,\nu)\eta,\varphi)\_{\mathfrak{G}}$$

is valid for all $\nu, \mu \in \Omega$ and $\eta, \varphi \in \mathcal{G}$.

(iii) The kernel $\mathsf{K}(\cdot, \cdot)$ is nonnegative and symmetric.

(iv) The linear space $\operatorname{span} \{\mathsf{K}(\cdot, \mu)\varphi : \mu \in \Omega, \ \varphi \in \mathcal{G}\}$ is dense in $\mathfrak{H}$.

(v) If the functions in $\mathfrak{H}$ are holomorphic on $\Omega$, then the kernel $\mathsf{K}(\cdot, \cdot)$ is uniformly bounded on compact subsets of $\Omega$.
Proof. (i) & (ii) Note that for f ∈ H, ϕ ∈ G, and μ ∈ Ω one has

$$(f(\mu), \varphi)\_{\mathfrak{G}} = (E(\mu)f, \varphi)\_{\mathfrak{G}} = \langle f, E(\mu)^\* \varphi \rangle$$

and observe that E(μ)∗ϕ is a function in H whose value at λ ∈ Ω is given by

$$(E(\mu)^\* \varphi)(\lambda) = E(\lambda)E(\mu)^\* \varphi = \mathsf{K}(\lambda, \mu)\varphi.$$

This implies that K(·, ·) has the reproducing kernel property (4.1.2). The identity in (ii) follows with the special choice f(·) = K(·, ν)η.

(iii) To see that K(·, ·) is a nonnegative kernel it suffices to observe that the matrix

$$\begin{aligned} \left( \left( \mathsf{K}(\lambda\_i, \lambda\_j) \varphi\_j, \varphi\_i \right)\_{\mathcal{G}} \right)\_{i,j=1}^n &= \left( \left( E(\lambda\_i) E(\lambda\_j)^\* \varphi\_j, \varphi\_i \right)\_{\mathcal{G}} \right)\_{i,j=1}^n \\ &= \left( \left< E(\lambda\_j)^\* \varphi\_j, E(\lambda\_i)^\* \varphi\_i \right> \right)\_{i,j=1}^n \end{aligned}$$

is nonnegative. Lemma 4.1.2 implies that K(·, ·) is symmetric.

(iv) In order to see that the subspace $\operatorname{span} \{\mathsf{K}(\cdot, \mu)\varphi : \mu \in \Omega, \ \varphi \in \mathcal{G}\}$ is dense in $\mathfrak{H}$, assume that there is an element $f \in \mathfrak{H}$ such that $\langle f, \mathsf{K}(\cdot, \mu)\varphi \rangle = 0$ for all $\varphi \in \mathcal{G}$ and $\mu \in \Omega$. But then $(f(\mu), \varphi)\_{\mathfrak{G}} = 0$ for all $\varphi \in \mathcal{G}$ and $\mu \in \Omega$. Hence, $f(\mu) = 0$ for all $\mu \in \Omega$, i.e., $f$ is the null function, which completes the argument.

(v) To see that K(·, ·) is uniformly bounded on compact subsets of Ω, note first that

$$\|\mathbb{K}(\lambda,\lambda)\| = \|E(\lambda)E(\lambda)^\*\| = \|E(\lambda)\|^2. \tag{4.1.3}$$

Now observe that for all f ∈ H and ϕ ∈ G,

$$(E(\lambda)f, \varphi)\_{\mathfrak{G}} = (f(\lambda), \varphi)\_{\mathfrak{G}}.$$

Since by assumption the function $\lambda \mapsto f(\lambda)$ from $\Omega$ to $\mathcal{G}$ is holomorphic, it follows that the mapping $\lambda \mapsto E(\lambda)$ from $\Omega$ to $\mathbf{B}(\mathfrak{H}, \mathcal{G})$ is holomorphic, which implies that $\lambda \mapsto E(\lambda)$ is continuous. Hence, for any compact set $K \subset \Omega$ there is some $M' \geq 0$ such that

$$\sup\_{\lambda \in K} \|E(\lambda)\| \le M'.$$

Therefore, (4.1.3) shows that the kernel K(·, ·) is uniformly bounded on compact subsets of Ω. -

It has been shown in Theorem 4.1.3 that a Hilbert space of $\mathcal{G}$-valued functions in which point evaluation is bounded gives rise to a nonnegative kernel that possesses the reproducing kernel property in (4.1.2). Now it will be shown, conversely, that any nonnegative kernel $\mathsf{K}(\cdot, \cdot)$ gives rise to such a reproducing kernel Hilbert space. Assume that $\mathsf{K}(\cdot, \cdot)$ is a nonnegative kernel on $\Omega$ with values in $\mathbf{B}(\mathcal{G})$. Then $\mathsf{K}(\cdot, \cdot)$ is automatically symmetric by Lemma 4.1.2. Consider the linear space of functions from $\Omega$ into $\mathcal{G}$ generated by $\mathsf{K}(\cdot, \cdot)$ via

$$\mathring{\mathfrak{H}}(\mathsf{K}) := \operatorname{span}\left\{\lambda \mapsto \mathsf{K}(\lambda, \mu)\varphi \, : \, \mu \in \Omega, \, \varphi \in \mathcal{G}\right\}.\tag{4.1.4}$$

Define the form $\langle \cdot, \cdot \rangle$ on generating elements by

$$\langle \mathsf{K}(\cdot,\nu)\eta, \mathsf{K}(\cdot,\mu)\varphi \rangle := (\mathsf{K}(\mu,\nu)\eta, \varphi)\_{\mathcal{G}}, \quad \nu, \mu \in \Omega, \ \eta, \varphi \in \mathcal{G}, \tag{4.1.5}$$

and extend it to a form on $\mathring{\mathfrak{H}}(\mathsf{K})$ by

$$\left\langle \sum\_{j=1}^{n} \alpha\_{j} \mathsf{K}(\cdot, \nu\_{j}) \varphi\_{j}, \sum\_{i=1}^{m} \beta\_{i} \mathsf{K}(\cdot, \mu\_{i}) \psi\_{i} \right\rangle = \sum\_{i,j=1}^{n,m} \left( \mathsf{K}(\mu\_{i}, \nu\_{j}) \alpha\_{j} \varphi\_{j}, \beta\_{i} \psi\_{i} \right)\_{\mathcal{G}}, \tag{4.1.6}$$

where $\alpha\_j, \beta\_i \in \mathbb{C}$, $\nu\_j, \mu\_i \in \Omega$, and $\varphi\_j, \psi\_i \in \mathcal{G}$ for $j = 1, \dots, n$, $i = 1, \dots, m$.

In particular, one has by (4.1.5) for all $f \in \mathring{\mathfrak{H}}(\mathsf{K})$

$$\langle f, \mathsf{K}(\cdot, \mu)\varphi\rangle = (f(\mu), \varphi)\_{\mathfrak{G}}, \qquad \mu \in \Omega, \ \varphi \in \mathfrak{G}.\tag{4.1.7}$$

Thus, the definition of the form $\langle \cdot, \cdot \rangle$ in (4.1.6) implies the reproducing kernel property in (4.1.7); the kernel $\mathsf{K}(\cdot, \cdot)$ is called a reproducing kernel, relative to the linear space $\mathring{\mathfrak{H}}(\mathsf{K})$ in (4.1.4). It will now be shown that the form defined by (4.1.5) or (4.1.6) is actually a scalar product.

**Lemma 4.1.4.** Let $\Omega \subset \mathbb{C}$ be an open set, let $\mathcal{G}$ be a Hilbert space, and let the kernel $\mathsf{K}(\cdot, \cdot)$ in (4.1.1) be nonnegative. Define the space $\mathring{\mathfrak{H}}(\mathsf{K})$ by (4.1.4) and the form $\langle \cdot, \cdot \rangle$ on $\mathring{\mathfrak{H}}(\mathsf{K})$ as in (4.1.5) and (4.1.6). Then $\mathring{\mathfrak{H}}(\mathsf{K})$ is a pre-Hilbert space with the scalar product $\langle \cdot, \cdot \rangle$.

Proof. A straightforward calculation shows that $\langle \cdot, \cdot \rangle$ is a well-defined sesquilinear form on $\mathring{\mathfrak{H}}(\mathsf{K})$. By Lemma 4.1.2, the kernel $\mathsf{K}(\cdot, \cdot)$ is symmetric and this yields that $\langle \cdot, \cdot \rangle$ is symmetric. In order to show that $\langle \cdot, \cdot \rangle$ is nonnegative on $\mathring{\mathfrak{H}}(\mathsf{K})$, observe that

$$\begin{aligned} \left\langle \sum\_{j=1}^{n} \alpha\_{j} \mathsf{K}(\cdot, \nu\_{j}) \varphi\_{j}, \sum\_{i=1}^{n} \alpha\_{i} \mathsf{K}(\cdot, \nu\_{i}) \varphi\_{i} \right\rangle &= \sum\_{i,j=1}^{n} \left( \mathsf{K}(\nu\_{i}, \nu\_{j}) \alpha\_{j} \varphi\_{j}, \alpha\_{i} \varphi\_{i} \right)\_{\mathcal{G}} \\ &= \sum\_{i,j=1}^{n} \left( \left( \mathsf{K}(\nu\_{i}, \nu\_{j}) \varphi\_{j}, \varphi\_{i} \right)\_{\mathcal{G}} \alpha\_{j}, \alpha\_{i} \right), \end{aligned} \tag{4.1.8}$$

for all $\nu\_1, \dots, \nu\_n \in \Omega$, $\varphi\_1, \dots, \varphi\_n \in \mathcal{G}$, and $\alpha\_1, \dots, \alpha\_n \in \mathbb{C}$. Clearly, the last term is equal to

$$
\begin{pmatrix}
\begin{pmatrix}
(\mathsf{K}(\nu\_{1},\nu\_{1})\varphi\_{1},\varphi\_{1})\_{\mathfrak{G}} & \cdots & (\mathsf{K}(\nu\_{1},\nu\_{n})\varphi\_{n},\varphi\_{1})\_{\mathfrak{G}} \\
\vdots & & \vdots \\
(\mathsf{K}(\nu\_{n},\nu\_{1})\varphi\_{1},\varphi\_{n})\_{\mathfrak{G}} & \cdots & (\mathsf{K}(\nu\_{n},\nu\_{n})\varphi\_{n},\varphi\_{n})\_{\mathfrak{G}}
\end{pmatrix}
\begin{pmatrix}
\alpha\_{1} \\
\vdots \\
\alpha\_{n}
\end{pmatrix},
\begin{pmatrix}
\alpha\_{1} \\
\vdots \\
\alpha\_{n}
\end{pmatrix}
\end{pmatrix}.
$$

The assumption that the kernel K(·, ·) is nonnegative means that the n×n matrix

$$\left[ (\mathsf{K}(\nu\_i, \nu\_j)\varphi\_j, \varphi\_i)\_{\mathsf{G}} \right]\_{i,j=1}^n$$

is nonnegative. Thus for a typical element

$$f = \sum\_{j=1}^{n} \alpha\_j \mathsf{K}(\cdot, \nu\_j) \varphi\_j \in \mathring{\mathfrak{H}}(\mathsf{K}),$$

one sees that $\langle f, f \rangle \geq 0$ and hence $\langle \cdot, \cdot \rangle$ is a nonnegative symmetric sesquilinear form on $\mathring{\mathfrak{H}}(\mathsf{K})$. In particular, $\langle \cdot, \cdot \rangle$ satisfies the Cauchy–Schwarz inequality. This implies that $\langle \cdot, \cdot \rangle$ is positive definite. In fact, if $\langle f, f \rangle = 0$ for some $f \in \mathring{\mathfrak{H}}(\mathsf{K})$, then

$$\left| \langle f, g \rangle \right|^2 \le \langle f, f \rangle \langle g, g \rangle = 0 \quad \text{for all} \quad g \in \mathring{\mathfrak{H}}(\mathsf{K}).$$

Hence, with g = K(·, μ)ψ, μ ∈ Ω, ψ ∈ G, the reproducing kernel property (4.1.7) shows that

$$0 = \langle f, \mathsf{K}(\cdot, \mu)\psi \rangle = (f(\mu), \psi)\_{\mathcal{G}}.$$

Thus, $f(\mu) = 0$ for all $\mu \in \Omega$ and so $f = 0 \in \mathring{\mathfrak{H}}(\mathsf{K})$.

Summing up, it has been shown that $\langle \cdot, \cdot \rangle$ is a positive definite symmetric sesquilinear form on $\mathring{\mathfrak{H}}(\mathsf{K})$, that is, $\langle \cdot, \cdot \rangle$ is a scalar product and $(\mathring{\mathfrak{H}}(\mathsf{K}), \langle \cdot, \cdot \rangle)$ is a pre-Hilbert space. $\square$

In the following theorem it is shown that a nonnegative kernel $\mathsf{K}(\cdot, \cdot)$ on $\Omega$ produces a Hilbert space $\mathfrak{H}(\mathsf{K})$, as a completion of $\mathring{\mathfrak{H}}(\mathsf{K})$, of functions on $\Omega$ for which point evaluation is a continuous map. Moreover, if the kernel is holomorphic and uniformly bounded on compact subsets of $\Omega$, then the functions in the resulting Hilbert space are holomorphic.

**Theorem 4.1.5.** Let $\mathcal{G}$ be a Hilbert space, let $\mathsf{K}(\cdot, \cdot)$ be a nonnegative kernel on the open set $\Omega \subset \mathbb{C}$, and let the form $\langle \cdot, \cdot \rangle$ on $\mathring{\mathfrak{H}}(\mathsf{K})$ be defined as in (4.1.5) and (4.1.6). Then the following statements hold:

(i) The completion of the pre-Hilbert space $(\mathring{\mathfrak{H}}(\mathsf{K}), \langle \cdot, \cdot \rangle)$ can be identified with a Hilbert space $\mathfrak{H}(\mathsf{K})$ of $\mathcal{G}$-valued functions on $\Omega$ which contains $\mathring{\mathfrak{H}}(\mathsf{K})$ as a dense subspace.

(ii) For all $f \in \mathfrak{H}(\mathsf{K})$ one has the reproducing kernel property

$$\langle f, \mathsf{K}(\cdot, \mu)\varphi \rangle = (f(\mu), \varphi)\_{\mathfrak{G}}, \qquad \mu \in \Omega, \ \varphi \in \mathfrak{G}.$$

(iii) For $\lambda \in \Omega$ the point evaluation $E(\lambda) : \mathfrak{H}(\mathsf{K}) \to \mathcal{G}$, $f \mapsto E(\lambda)f = f(\lambda)$, is a continuous linear mapping and

$$\mathsf{K}(\lambda, \mu) = E(\lambda)E(\mu)^\*, \quad \lambda, \mu \in \Omega.$$

(iv) If the kernel K(·, ·) is holomorphic and uniformly bounded on every compact subset of Ω, then the functions in H(K) are holomorphic on Ω.

Proof. (i) Let $(\mathfrak{H}(\mathsf{K}), \langle \cdot, \cdot \rangle)$ be the Hilbert space that is obtained when one completes the pre-Hilbert space $(\mathring{\mathfrak{H}}(\mathsf{K}), \langle \cdot, \cdot \rangle)$. It will be shown that the elements in $\mathfrak{H}(\mathsf{K})$ can be identified with $\mathcal{G}$-valued functions on $\Omega$. For this let $f \in \mathfrak{H}(\mathsf{K})$ and fix some $\lambda \in \Omega$. Consider the functional

$$
\Psi\_{f,\lambda} : \mathcal{G} \to \mathbb{C}, \qquad \varphi \mapsto \langle \mathsf{K}(\cdot, \lambda)\varphi, f \rangle .
$$

Then an application of the Cauchy–Schwarz inequality shows that

$$\begin{aligned} |\Psi\_{f,\lambda}(\varphi)|^2 &= |\langle \mathsf{K}(\cdot,\lambda)\varphi, f\rangle|^2 \\ &\le \|\mathsf{K}(\cdot,\lambda)\varphi\|^2 \|f\|^2 \\ &= \langle \mathsf{K}(\cdot,\lambda)\varphi, \mathsf{K}(\cdot,\lambda)\varphi\rangle \|f\|^2 \\ &= (\mathsf{K}(\lambda,\lambda)\varphi, \varphi)\_{\mathfrak{G}} \|f\|^2 \\ &\le \|\mathsf{K}(\lambda,\lambda)\| \|f\|^2 \|\varphi\|\_{\mathfrak{G}}^2, \end{aligned}$$

and hence Ψf,λ is continuous. By the Riesz representation theorem, there is a unique vector ψf,λ ∈ G such that

$$(\varphi, \psi\_{f,\lambda})\_{\mathcal{G}} = \Psi\_{f,\lambda}(\varphi) = \langle \mathsf{K}(\cdot, \lambda)\varphi, f\rangle, \qquad \varphi \in \mathcal{G}.$$

Let F(Ω, G) be the space of all G-valued functions defined on Ω, and consider the mapping

$$\iota: \mathfrak{H}(\mathsf{K}) \to \mathfrak{F}(\Omega, \mathcal{G}), \quad f \mapsto \iota(f), \quad \text{where} \quad \iota(f)(\lambda) := \psi\_{f,\lambda}. \tag{4.1.9}$$

It follows from the definition of ι and ψf,λ that

$$\left(\iota(f)(\lambda),\varphi\right)\_{\mathfrak{G}} = (\psi\_{f,\lambda},\varphi)\_{\mathfrak{G}} = \langle f, \mathsf{K}(\cdot,\lambda)\varphi\rangle,\tag{4.1.10}$$

and this equality also shows that ι is a linear mapping.

The mapping ι in (4.1.9) is injective. To see this, assume that ι(f) = 0 for some f ∈ H(K). This means ι(f)(λ) = 0 for all λ ∈ Ω, and (4.1.10) implies ⟨f, K(·, λ)ϕ⟩ = 0 for all λ ∈ Ω and ϕ ∈ G. Since the linear span of the functions K(·, λ)ϕ forms the dense subspace ˚H(K) of H(K), it follows that f = 0, that is, ι is injective.

Observe that for f ∈ ˚H(K) it follows from (4.1.10) and the reproducing kernel property (4.1.7) that

$$\left(\iota(f)(\lambda),\varphi\right)\_{\mathfrak{G}} = \langle f, \mathsf{K}(\cdot,\lambda)\varphi\rangle = (f(\lambda),\varphi)\_{\mathfrak{G}}, \qquad \varphi \in \mathfrak{G},$$

for all λ ∈ Ω, and hence ι(f) = f for f ∈ ˚H(K). In other words, ι restricted to the dense subspace ˚H(K) is the identity, so that ι(˚H(K)) = ˚H(K).

Finally, item (i) follows when the subspace ran ι of F(Ω, G) is equipped with the scalar product induced by H(K), that is, for f̃, g̃ ∈ ran ι define

$$
\langle \tilde{f}, \tilde{g} \rangle\_{\sim} := \langle \iota^{-1} \tilde{f}, \iota^{-1} \tilde{g} \rangle .
$$

Then ι is a unitary mapping from the Hilbert space (H(K), ⟨·, ·⟩) onto the Hilbert space (ran ι, ⟨·, ·⟩∼).

(ii) After identifying ι(f) and f ∈ H(K) as in (i), the reproducing kernel property is immediate from (4.1.10).

(iii) With the identification from (ii) observe that for all λ ∈ Ω and ϕ ∈ G the mapping

$$f \mapsto (f(\lambda), \varphi)\_{\mathcal{G}}\tag{4.1.11}$$

is continuous on H(K). In fact, this follows from (ii) and the computation

$$\begin{aligned} |(f(\lambda), \varphi)\_{\mathcal{G}}|^2 &= |\langle f, \mathsf{K}(\cdot, \lambda)\varphi \rangle|^2 \\ &\le \langle f, f \rangle \langle \mathsf{K}(\cdot, \lambda)\varphi, \mathsf{K}(\cdot, \lambda)\varphi \rangle \\ &= (\mathsf{K}(\lambda, \lambda)\varphi, \varphi)\_{\mathcal{G}} \|f\|^2. \end{aligned}$$

For a fixed λ ∈ Ω the mapping

$$E(\lambda): \mathfrak{H}(\mathsf{K}) \to \mathfrak{G}, \quad f \mapsto E(\lambda)f = f(\lambda),$$

is closed. To see this, suppose that fn → f in H(K) and E(λ)fn → ψ in G. As dom E(λ) = H(K), it follows that f ∈ dom E(λ), and the continuity of (4.1.11) then yields

$$\begin{aligned} (\psi, \varphi)\_{\mathfrak{G}} &= \lim\_{n \to \infty} (E(\lambda)f\_n, \varphi)\_{\mathfrak{G}} = \lim\_{n \to \infty} (f\_n(\lambda), \varphi)\_{\mathfrak{G}} \\ &= (f(\lambda), \varphi)\_{\mathfrak{G}} = (E(\lambda)f, \varphi)\_{\mathfrak{G}} \end{aligned}$$

for all ϕ ∈ G. This shows E(λ)f = ψ and hence E(λ) is a closed operator. Since dom E(λ) = H(K), the closed graph theorem implies that E(λ) is continuous.

It remains to check the identity K(λ, μ) = E(λ)E(μ)∗ for λ, μ ∈ Ω. For this let ϕ, ψ ∈ G, λ, μ ∈ Ω, and note that E(μ)∗ϕ ∈ H(K) is a function in the variable λ, so that E(λ)E(μ)∗ϕ = (E(μ)∗ϕ)(λ). This, the reproducing kernel property, and the symmetry of the kernel K(·, ·) imply

$$\begin{aligned} (E(\lambda)E(\mu)^\*\varphi,\psi)\_{\mathcal{G}} &= ((E(\mu)^\*\varphi)(\lambda),\psi)\_{\mathcal{G}} \\ &= \langle E(\mu)^\*\varphi,\mathsf{K}(\cdot,\lambda)\psi \rangle \\ &= (\varphi,E(\mu)\mathsf{K}(\cdot,\lambda)\psi)\_{\mathcal{G}} \\ &= (\varphi,\mathsf{K}(\mu,\lambda)\psi)\_{\mathcal{G}} \\ &= (\mathsf{K}(\lambda,\mu)\varphi,\psi)\_{\mathcal{G}}, \end{aligned}$$

which shows that K(λ, μ) = E(λ)E(μ)∗.

(iv) Let f ∈ H(K) and choose a sequence fn ∈ ˚H(K) such that fn → f in H(K). By assumption, the functions K(·, μ)ϕ are holomorphic on Ω, and hence so are the functions fn. Now let K ⊂ Ω be a compact set and let MK = supλ∈K ‖K(λ, λ)‖. Then for λ ∈ K one gets

$$\begin{aligned} |(f(\lambda) - f\_n(\lambda), \varphi)\_{\mathcal{G}}| &= |\langle f - f\_n, \mathsf{K}(\cdot, \lambda)\varphi \rangle| \\ &\le \|f - f\_n\| \|\mathsf{K}(\cdot, \lambda)\varphi\| \\ &= \|f - f\_n\| (\mathsf{K}(\lambda, \lambda)\varphi, \varphi)\_{\mathcal{G}}^{1/2} \\ &\le M\_K^{1/2} \|f - f\_n\| \|\varphi\|\_{\mathcal{G}}, \end{aligned}$$

and hence (fn(·), ϕ)G → (f(·), ϕ)G uniformly on K for all ϕ ∈ G. As K is an arbitrary compact subset of Ω, it follows that the function λ → (f(λ), ϕ)G is holomorphic on Ω for all ϕ ∈ G. This implies that f is holomorphic. □
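The objects in Theorem 4.1.5 become very concrete in the scalar case G = C. The following numerical sketch (an illustration, not part of the text) uses the classical Szegő kernel K(λ, μ) = (1 − λμ̄)⁻¹ on the open unit disk, whose reproducing kernel Hilbert space is the Hardy space H²; it checks that a Gram matrix of kernel sections is positive definite and that the abstract inner product ⟨K(·, μ2), K(·, μ1)⟩ = K(μ1, μ2) agrees with the concrete H² boundary integral.

```python
import numpy as np

def szego(lam, mu):
    """Szego kernel K(lam, mu) = 1/(1 - lam*conj(mu)) on the unit disk."""
    return 1.0 / (1.0 - lam * np.conj(mu))

# two points in the open unit disk
mu1, mu2 = 0.3 + 0.2j, -0.5 + 0.1j

# Gram matrix of the kernel sections K(., mu_j); it is Hermitian and,
# by nonnegativity of the kernel, positive semidefinite
gram = np.array([[szego(a, b) for b in (mu1, mu2)] for a in (mu1, mu2)])
eigs = np.linalg.eigvalsh(gram)
assert eigs.min() > 0

# the abstract inner product <K(., mu2), K(., mu1)> = K(mu1, mu2) coincides
# with the Hardy-space inner product computed on the boundary circle
theta = 2 * np.pi * np.arange(4096) / 4096
z = np.exp(1j * theta)
h2 = (szego(z, mu2) * np.conj(szego(z, mu1))).mean()
assert abs(h2 - szego(mu1, mu2)) < 1e-8
```

In this model the point evaluation E(λ) of item (iii) is evaluation of a Hardy-space function at λ, and the identity K(λ, μ) = E(λ)E(μ)∗ reduces to the formula for the kernel itself.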

If the kernel K(·, ·) is holomorphic and uniformly bounded on every compact subset of Ω, then the elements in the reproducing kernel Hilbert space H(K) can be described as holomorphic functions from Ω to G which satisfy an additional boundedness condition involving the kernel K(·, ·).

**Theorem 4.1.6.** Let G be a Hilbert space, assume that K(·, ·) is a nonnegative holomorphic kernel on the open set Ω ⊂ C which is uniformly bounded on every compact subset of Ω, and let (H(K), ⟨·, ·⟩) be the associated reproducing kernel Hilbert space. Then f ∈ H(K) with ‖f‖ ≤ γ if and only if f : Ω → G is holomorphic and the n × n matrix

$$\gamma^2 \left[ (\mathsf{K}(\nu\_i, \nu\_j)\varphi\_j, \varphi\_i)\_{\mathcal{G}} \right]\_{i,j=1}^n - \left[ (f(\nu\_i), \varphi\_i)\_{\mathcal{G}} \overline{(f(\nu\_j), \varphi\_j)\_{\mathcal{G}}} \right]\_{i,j=1}^n \tag{4.1.12}$$

is nonnegative for all n ∈ N, ν1,...,νn ∈ Ω, and ϕ1,...,ϕn ∈ G.

Proof. In order to prove the necessary and sufficient conditions it is helpful to note that the formulation of the condition (4.1.12) is based on the following identities. For the reproducing kernel K(·, ·) one has, as in (4.1.8),

$$\sum\_{i,j=1}^{n} \left( \mathsf{K}(\nu\_i, \nu\_j)\varphi\_j, \varphi\_i \right)\_{\mathcal{G}} \alpha\_j \overline{\alpha\_i} = \Big\| \sum\_{j=1}^{n} \alpha\_j \mathsf{K}(\cdot, \nu\_j) \varphi\_j \Big\|^2 \tag{4.1.13}$$

for all ν1,...,νn ∈ Ω, ϕ1,...,ϕn ∈ G, and α1,...,αn ∈ C. Furthermore, for a function f : Ω → G which is holomorphic one has

$$\begin{split} \sum\_{i,j=1}^{n} (f(\nu\_i), \varphi\_i)\_{\mathcal{G}} \overline{(f(\nu\_j), \varphi\_j)\_{\mathcal{G}}}\, \alpha\_j \overline{\alpha\_i} &= \sum\_{i,j=1}^{n} (f(\nu\_i), \alpha\_i \varphi\_i)\_{\mathcal{G}} \overline{(f(\nu\_j), \alpha\_j \varphi\_j)\_{\mathcal{G}}} \\ &= \left( \sum\_{i=1}^{n} (f(\nu\_i), \alpha\_i \varphi\_i)\_{\mathcal{G}} \right) \overline{\left( \sum\_{j=1}^{n} (f(\nu\_j), \alpha\_j \varphi\_j)\_{\mathcal{G}} \right)} \\ &= \Big| \sum\_{j=1}^{n} (f(\nu\_j), \alpha\_j \varphi\_j)\_{\mathcal{G}} \Big|^2 \\ &= \Big| \sum\_{j=1}^{n} (\alpha\_j \varphi\_j, f(\nu\_j))\_{\mathcal{G}} \Big|^2 \end{split} \tag{4.1.14}$$

for all ν1,...,νn ∈ Ω, ϕ1,...,ϕn ∈ G, and α1,...,αn ∈ C.

Assume that the function f : Ω → G is holomorphic and there exists γ > 0 such that the n × n matrix (4.1.12) is nonnegative for all n ∈ N, ν1,...,νn ∈ Ω, and ϕ1,...,ϕn ∈ G. Together with (4.1.13) and (4.1.14), this implies that the relation from H(K) to C, spanned by the elements

$$\left\{ \sum\_{j=1}^{n} \alpha\_j \mathsf{K}(\cdot, \nu\_j) \varphi\_j, \sum\_{j=1}^{n} \left( \alpha\_j \varphi\_j, f(\nu\_j) \right)\_{\mathcal{G}} \right\},$$

where ν1,...,νn ∈ Ω, ϕ1,...,ϕn ∈ G, and α1,...,αn ∈ C, is a bounded functional with bound γ. Furthermore, it is densely defined on H(K), so that it admits a uniquely defined bounded linear extension defined on all of H(K). This functional is represented by a unique element F ∈ H(K) with ‖F‖ ≤ γ via the Riesz representation theorem. In particular, this means that

$$(\varphi, f(\nu))\_{\mathfrak{G}} = \langle \mathsf{K}(\cdot, \nu)\varphi, F \rangle, \quad \nu \in \Omega, \ \varphi \in \mathfrak{G},$$

whereas by the reproducing kernel property one has

$$\langle \mathsf{K}(\cdot,\nu)\varphi, F\rangle = \overline{\langle F, \mathsf{K}(\cdot,\nu)\varphi\rangle} = \overline{(F(\nu), \varphi)\_{\mathcal{G}}} = (\varphi, F(\nu))\_{\mathcal{G}}, \quad \nu \in \Omega, \ \varphi \in \mathcal{G}.$$

Combining the last two identities one concludes that f = F, which gives that f ∈ H(K) and ‖f‖ ≤ γ.

For the converse statement, assume that f ∈ H(K) and ‖f‖ ≤ γ. Then the function f : Ω → G is holomorphic and for all ν1,...,νn ∈ Ω, ϕ1,...,ϕn ∈ G, and α1,...,αn ∈ C one has by means of (4.1.14) and the fact that f ∈ H(K)

$$\begin{split} \sum\_{i,j=1}^{n} (f(\nu\_{i}), \varphi\_{i})\_{\mathcal{G}} \overline{(f(\nu\_{j}), \varphi\_{j})\_{\mathcal{G}}}\, \alpha\_{j} \overline{\alpha\_{i}} &= \Big| \sum\_{j=1}^{n} (f(\nu\_{j}), \alpha\_{j} \varphi\_{j})\_{\mathcal{G}} \Big|^{2} \\ &= \Big| \sum\_{j=1}^{n} \langle f, \alpha\_{j} \mathsf{K}(\cdot, \nu\_{j}) \varphi\_{j} \rangle \Big|^{2} \\ &= \Big| \Big\langle f, \sum\_{j=1}^{n} \alpha\_{j} \mathsf{K}(\cdot, \nu\_{j}) \varphi\_{j} \Big\rangle \Big|^{2} \\ &\leq \|f\|^{2} \Big\| \sum\_{j=1}^{n} \alpha\_{j} \mathsf{K}(\cdot, \nu\_{j}) \varphi\_{j} \Big\|^{2}. \end{split}$$

Together with (4.1.13) and ‖f‖ ≤ γ this gives (4.1.12). □
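In the scalar case G = C the membership criterion of Theorem 4.1.6 can be probed numerically. The sketch below (an illustration with assumed sample points, not part of the text) takes the Szegő kernel on the unit disk and the function f = K(·, w), for which ‖f‖² = K(w, w), and verifies that the matrix (4.1.12) with γ² = K(w, w) is positive semidefinite.

```python
import numpy as np

def szego(lam, mu):
    # Szego kernel on the unit disk (scalar case, so all phi_i = 1)
    return 1.0 / (1.0 - lam * np.conj(mu))

w = 0.4 - 0.3j                 # f = K(., w) belongs to H(K)
gamma2 = szego(w, w).real      # ||f||^2 = <K(., w), K(., w)> = K(w, w)

rng = np.random.default_rng(1)
nu = 0.8 * (rng.random(6) - 0.5) + 0.8j * (rng.random(6) - 0.5)  # disk points

K = np.array([[szego(a, b) for b in nu] for a in nu])
f = np.array([szego(a, w) for a in nu])          # values f(nu_i)

# the matrix gamma^2 [K(nu_i, nu_j)] - [f(nu_i) conj(f(nu_j))] from (4.1.12)
crit = gamma2 * K - np.outer(f, f.conj())
min_eig = np.linalg.eigvalsh((crit + crit.conj().T) / 2).min()
assert min_eig > -1e-10
```

The nonnegativity is exactly the Cauchy–Schwarz bound |⟨f, g⟩|² ≤ ‖f‖²‖g‖² applied to g in the span of the kernel sections at ν1,...,νn.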

Due to the holomorphy it is sometimes convenient to consider the functions λ → K(λ, μ)ϕ, ϕ ∈ G, only for μ in a determining set of points in Ω.

**Corollary 4.1.7.** Let K(·, ·) be a nonnegative holomorphic kernel on an open set Ω ⊂ C which is uniformly bounded on every compact subset of Ω, and let H(K) be the associated reproducing kernel Hilbert space. Let D ⊂ Ω be a set of points which has an accumulation point in each connected component of Ω. Then

$$\mathfrak{H}(\mathsf{K}) = \overline{\operatorname{span}}\left\{\lambda \mapsto \mathsf{K}(\lambda, \mu)\varphi : \mu \in \mathcal{D}, \,\varphi \in \mathcal{G}\right\}.$$

Proof. The inclusion (⊃) is obvious from (4.1.4). To show the inclusion (⊂), it suffices to verify that the linear space

$$\text{span}\left\{\lambda \mapsto \mathsf{K}(\lambda, \mu)\varphi : \mu \in \mathcal{D}, \,\varphi \in \mathcal{G}\right\}$$

is dense in H(K). Therefore, let f ∈ H(K) be orthogonal to this set. Then

$$0 = \langle f, \mathsf{K}(\cdot, \mu)\varphi \rangle = (f(\mu), \varphi)\_{\mathfrak{G}}$$

for all μ ∈ D and ϕ ∈ G, and hence f(μ) = 0 for all μ ∈ D. Since f ∈ H(K) is holomorphic on Ω, the assumption on D now implies that f(λ) = 0 for all λ ∈ Ω. Hence, f = 0 and the proof is complete. □

Let K(·, ·) be a nonnegative holomorphic kernel on an open set Ω. If Ω′ ⊂ C is an open set such that Ω ⊂ Ω′ and if K′(·, ·) is a nonnegative holomorphic kernel on Ω′ extending K(·, ·), then the functions in the reproducing kernel Hilbert space H(K) may be seen as restrictions to Ω of the functions in the reproducing kernel Hilbert space H(K′).

**Proposition 4.1.8.** Let K(·, ·) be a nonnegative holomorphic kernel on an open set Ω ⊂ C which is uniformly bounded on every compact subset of Ω. Assume that Ω′ ⊂ C is an open set such that Ω ⊂ Ω′ and that K′(·, ·) is a nonnegative holomorphic kernel on Ω′ which is uniformly bounded on every compact subset of Ω′ and which is equal to K(·, ·) on Ω. Then

$$\mathfrak{H}(\mathsf{K}) = \left\{ f|\_{\Omega} : f \in \mathfrak{H}(\mathsf{K}') \right\}.$$

Proof. Consider the linear space of functions from Ω′ into G generated by K′(·, ·):

$$\mathring{\mathfrak{H}}(\mathsf{K}') := \text{span}\left\{ \lambda \mapsto \mathsf{K}'(\lambda, \mu)\varphi \, : \, \mu \in \Omega', \, \varphi \in \mathcal{G} \right\}.$$

It is clear that the analogous linear space

$$\mathring{\mathfrak{H}}(\mathsf{K}) := \text{span}\left\{ \lambda \mapsto \mathsf{K}(\lambda, \mu)\varphi \, : \, \mu \in \Omega, \, \varphi \in \mathcal{G} \right\}$$

is contained in ˚H(K′) in the sense that each function λ → K(λ, μ)ϕ with μ ∈ Ω is the restriction to Ω of the function λ → K′(λ, μ)ϕ. Hence, a continuity argument shows the inclusion

$$
\mathfrak{H}(\mathsf{K}) \subset \left\{ f|\_{\Omega} : f \in \mathfrak{H}(\mathsf{K}') \right\}.
$$

For the opposite inclusion consider f|Ω : Ω → G for some f ∈ H(K′), and set γ = ‖f‖. Then, by Theorem 4.1.6, the matrix

$$\gamma^2 \left[ (\mathsf{K}'(\nu\_i, \nu\_j)\varphi\_j, \varphi\_i)\_{\mathcal{G}} \right]\_{i,j=1}^n - \left[ (f(\nu\_i), \varphi\_i)\_{\mathcal{G}} \overline{(f(\nu\_j), \varphi\_j)\_{\mathcal{G}}} \right]\_{i,j=1}^n$$

is nonnegative for all n ∈ N, ν1,...,νn ∈ Ω′, and ϕ1,...,ϕn ∈ G. In particular,

$$\gamma^2 \left[ (\mathsf{K}(\nu\_i, \nu\_j)\varphi\_j, \varphi\_i)\_{\mathcal{G}} \right]\_{i,j=1}^n - \left[ (f(\nu\_i), \varphi\_i)\_{\mathcal{G}} \overline{(f(\nu\_j), \varphi\_j)\_{\mathcal{G}}} \right]\_{i,j=1}^n$$

is nonnegative for all n ∈ N, ν1,...,νn ∈ Ω, and ϕ1,...,ϕn ∈ G. Another application of Theorem 4.1.6 implies f|Ω ∈ H(K). □

Under suitable circumstances multiplication of a given reproducing kernel by an operator function gives rise to a new reproducing kernel. In the following proposition this fact and the relation between the corresponding reproducing kernel Hilbert spaces are explained.

**Proposition 4.1.9.** Let G be a Hilbert space, assume that K(·, ·) is a nonnegative kernel on Ω, and let (H(K), ⟨·, ·⟩) be the associated reproducing kernel Hilbert space. Let Φ : Ω → **B**(G) be such that 0 ∈ ρ(Φ(λ)) for all λ ∈ Ω. Then

$$\mathsf{K}\_{\Phi}(\lambda,\mu) = \Phi(\lambda)\mathsf{K}(\lambda,\mu)\Phi(\mu)^{\*}\tag{4.1.15}$$

is a nonnegative kernel on Ω and the corresponding reproducing kernel Hilbert space (H(KΦ), ⟨·, ·⟩Φ) is unitarily equivalent to H(K) via the mapping

$$\mathcal{M}\_{\Phi}: \mathfrak{H}(\mathsf{K}) \to \mathfrak{H}(\mathsf{K}\_{\Phi}), \qquad f \mapsto \Phi f.$$

Moreover, if K(·, ·) is holomorphic and uniformly bounded on every compact subset of Ω, and Φ is holomorphic, then also KΦ(·, ·) is holomorphic and uniformly bounded on every compact subset of Ω.

Proof. The definition (4.1.15) leads to the identity

$$\left( \left( \mathsf{K}\_{\Phi}(\lambda\_i, \lambda\_j) \varphi\_j, \varphi\_i \right)\_{\mathfrak{G}} \right)\_{i,j=1}^n = \left( (\mathsf{K}(\lambda\_i, \lambda\_j) \Phi(\lambda\_j)^\* \varphi\_j, \Phi(\lambda\_i)^\* \varphi\_i)\_{\mathfrak{G}} \right)\_{i,j=1}^n.$$

Hence, the nonnegativity of K(·, ·) implies that KΦ(·, ·) is a nonnegative kernel. Moreover, (4.1.15) shows that for all μ ∈ Ω and ϕ ∈ G

$$
\Phi(\cdot)\mathbb{K}(\cdot,\mu)\varphi = \Phi(\cdot)\mathbb{K}(\cdot,\mu)\Phi(\mu)^\*\Phi(\mu)^{-\*}\varphi = \mathbb{K}\_\Phi(\cdot,\mu)\Phi(\mu)^{-\*}\varphi.
$$

Hence, M<sup>Φ</sup> maps ˚H(K) onto ˚H(KΦ). The identity

$$\begin{aligned} \left\langle \Phi(\cdot) \mathsf{K}(\cdot,\mu) \varphi, \Phi(\cdot) \mathsf{K}(\cdot,\nu) \psi \right\rangle\_{\Phi} &= \left\langle \mathsf{K}\_{\Phi}(\cdot,\mu) \Phi(\mu)^{-\*} \varphi, \mathsf{K}\_{\Phi}(\cdot,\nu) \Phi(\nu)^{-\*} \psi \right\rangle\_{\Phi} \\ &= \left( \mathsf{K}\_{\Phi}(\nu,\mu) \Phi(\mu)^{-\*} \varphi, \Phi(\nu)^{-\*} \psi \right)\_{\mathcal{G}} \\ &= \left( \Phi(\nu) \mathsf{K}(\nu,\mu) \varphi, \Phi(\nu)^{-\*} \psi \right)\_{\mathcal{G}} \\ &= \left( \mathsf{K}(\nu,\mu) \varphi, \psi \right)\_{\mathcal{G}} \\ &= \left\langle \mathsf{K}(\cdot,\mu) \varphi, \mathsf{K}(\cdot,\nu) \psi \right\rangle, \end{aligned}$$

which is valid for all μ, ν ∈ Ω and all ϕ, ψ ∈ G, shows that the mapping MΦ from ˚H(K) onto ˚H(KΦ) is an isometry. Its unique bounded linear extension gives a unitary mapping from H(K) onto H(KΦ). In order to see that this extension MΦ acts as multiplication by Φ on all functions in H(K), let f ∈ H(K) and choose a sequence fn ∈ ˚H(K) such that fn → f in H(K). By isometry, the sequence MΦfn converges to MΦf in H(KΦ). Observe that the approximating sequence (fn) satisfies for all ϕ ∈ G

$$\begin{aligned} \left\langle (\mathcal{M}\_{\Phi} f\_n)(\cdot), \mathsf{K}\_{\Phi}(\cdot, \mu)\varphi \right\rangle\_{\Phi} &= \left\langle \Phi(\cdot) f\_n(\cdot), \mathsf{K}\_{\Phi}(\cdot, \mu)\varphi \right\rangle\_{\Phi} \\ &= \left( \Phi(\mu) f\_n(\mu), \varphi \right)\_{\mathcal{G}} \\ &= \left( f\_n(\mu), \Phi(\mu)^\* \varphi \right)\_{\mathcal{G}} \\ &= \left\langle f\_n(\cdot), \mathsf{K}(\cdot, \mu)\Phi(\mu)^\* \varphi \right\rangle. \end{aligned}$$

Hence, taking limits one sees that

$$
\left< (\mathcal{M}\_{\Phi} f)(\cdot), \mathbb{K}\_{\Phi}(\cdot, \mu) \varphi \right>\_{\Phi} = \left< f(\cdot), \mathbb{K}(\cdot, \mu) \Phi(\mu)^{\*} \varphi \right>,
$$

and therefore

$$\begin{aligned} \left( (\mathcal{M}\_{\Phi} f)(\mu), \varphi \right)\_{\mathcal{G}} &= \left\langle (\mathcal{M}\_{\Phi} f)(\cdot), \mathsf{K}\_{\Phi}(\cdot, \mu)\varphi \right\rangle\_{\Phi} \\ &= \left\langle f(\cdot), \mathsf{K}(\cdot, \mu)\Phi(\mu)^{\*}\varphi \right\rangle \\ &= \left( f(\mu), \Phi(\mu)^{\*}\varphi \right)\_{\mathcal{G}} \\ &= \left( \Phi(\mu) f(\mu), \varphi \right)\_{\mathcal{G}} \end{aligned}$$

for all ϕ ∈ G. This shows (MΦf)(μ) = Φ(μ)f(μ) for all μ ∈ Ω.

The last assertion on the holomorphy and uniform boundedness of KΦ(·, ·) on compact subsets of Ω is clear. □
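A scalar instance of Proposition 4.1.9 can be checked numerically. In the sketch below (an illustration, not part of the text) Φ is a nonvanishing holomorphic function on the unit disk, K is the Szegő kernel, and KΦ(λ, μ) = Φ(λ)K(λ, μ)Φ(μ)∗ with Φ(μ)∗ the complex conjugate; by (4.1.15) one has Φ(·)K(·, μ)ϕ = KΦ(·, μ)Φ(μ)⁻∗ϕ, and the isometry of MΦ on ˚H(K) becomes an equality of two Gram quadratic forms.

```python
import numpy as np

def szego(lam, mu):
    return 1.0 / (1.0 - lam * np.conj(mu))

def phi(lam):
    # a nonvanishing holomorphic multiplier on the unit disk
    return np.exp(lam) * (2.0 + lam)

rng = np.random.default_rng(2)
mu = 0.6 * (rng.random(4) - 0.5) + 0.6j * (rng.random(4) - 0.5)  # disk points
c = rng.standard_normal(4) + 1j * rng.standard_normal(4)         # coefficients

# ||f||^2 for f = sum_j c_j K(., mu_j), via the Gram matrix of kernel sections
G = np.array([[szego(a, b) for b in mu] for a in mu])
norm2_f = (c.conj() @ G @ c).real

# M_Phi f = sum_j d_j K_Phi(., mu_j) with d_j = c_j / conj(phi(mu_j)), by (4.1.15)
Gphi = np.array([[phi(a) * szego(a, b) * np.conj(phi(b)) for b in mu] for a in mu])
d = c / np.conj(phi(mu))
norm2_Mf = (d.conj() @ Gphi @ d).real

# the multiplication map M_Phi is an isometry
assert abs(norm2_f - norm2_Mf) < 1e-10 * max(1.0, norm2_f)
```

The two quadratic forms agree exactly, since the conjugating factors of Φ in the Gram matrix of KΦ cancel against the coefficients d.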

## **4.2 Realization of uniformly strict Nevanlinna functions**

The aim of this section is to show that every operator-valued uniformly strict Nevanlinna function can be realized as the Weyl function corresponding to a boundary triplet. The reproducing kernel Hilbert space associated with a given uniformly strict Nevanlinna function will serve as a model space. The uniqueness of the model is discussed as well.

Let G be a Hilbert space and let M be a **B**(G)-valued Nevanlinna function. The associated Nevanlinna kernel

$$
\mathbb{N}\_M(\cdot,\cdot) : \Omega \times \Omega \to \mathbf{B}(\mathcal{G}),
$$

with Ω = C \ R is defined by

$$\mathsf{N}\_{M}(\lambda,\mu) := \frac{M(\lambda) - M(\mu)^{\*}}{\lambda - \overline{\mu}}, \quad \lambda, \mu \in \mathbb{C} \backslash \mathbb{R}, \quad \lambda \neq \overline{\mu}, \tag{4.2.1}$$

and NM(λ, λ̄) := M′(λ), λ ∈ C \ R. Then clearly the kernel NM is symmetric. The kernel NM is holomorphic, since

$$
\lambda \mapsto \mathbb{N}\_M(\lambda, \mu)
$$

is holomorphic for each μ ∈ C \ R. Moreover, from the definition of NM one sees immediately that

$$\mathsf{N}\_M(\lambda,\lambda) = \frac{\mathrm{Im}\,M(\lambda)}{\mathrm{Im}\,\lambda}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

and hence NM(λ, λ) ≥ 0, λ ∈ C \ R. In the next theorem it turns out that the kernel NM is, in fact, nonnegative on C \ R. Note also that the kernel NM is uniformly bounded on compact subsets of C \ R since

$$\|\mathbb{N}\_M(\lambda, \lambda)\| \le \frac{\|M(\lambda)\|}{|\mathrm{Im}\,\lambda|}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

**Theorem 4.2.1.** Let M be a **B**(G)-valued Nevanlinna function. Then the kernel N<sup>M</sup> in (4.2.1) is nonnegative.

Proof. The function M has the integral representation

$$M(\lambda) = \alpha + \lambda \beta + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{1 + t^2} \right) d\Sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{4.2.2}$$

with self-adjoint operators α, β ∈ **B**(G), β ≥ 0, and a nondecreasing self-adjoint operator function Σ : R → **B**(G) such that

$$\int\_{\mathbb{R}} \frac{d\Sigma(t)}{1+t^2} \in \mathbf{B}(\mathcal{G}),$$

where the integral in (4.2.2) converges in the strong topology; cf. Theorem A.4.2. For any n ∈ N, points λ1,...,λn ∈ C \ R, and elements ϕ1,...,ϕn ∈ G it follows from (4.2.2) that

$$\begin{split} & \left( \left( \mathbb{N}\_{M} (\lambda\_{i}, \lambda\_{j}) \varphi\_{j}, \varphi\_{i} \right)\_{\mathcal{G}} \right)\_{i,j=1}^{n} \\ &= \left( \left( \beta \varphi\_{j}, \varphi\_{i} \right)\_{\mathcal{G}} \right)\_{i,j=1}^{n} + \left( \left( \left( \int\_{\mathbb{R}} \frac{1}{t-\lambda\_{i}} \, \frac{1}{t-\overline{\lambda}\_{j}} \, d\Sigma(t) \right) \varphi\_{j}, \varphi\_{i} \right)\_{\mathcal{G}} \right)\_{i,j=1}^{n}. \end{split} \tag{4.2.3}$$

The first matrix on the right-hand side in (4.2.3) is nonnegative, as for any vector (x1,...,xn) ∈ C^n and ϕ = x1ϕ1 + ··· + xnϕn the nonnegativity of the operator β implies

$$\left( \begin{pmatrix} (\beta \varphi\_1, \varphi\_1)\_{\mathcal{G}} & \cdots & (\beta \varphi\_n, \varphi\_1)\_{\mathcal{G}} \\ \vdots & & \vdots \\ (\beta \varphi\_1, \varphi\_n)\_{\mathcal{G}} & \cdots & (\beta \varphi\_n, \varphi\_n)\_{\mathcal{G}} \end{pmatrix} \begin{pmatrix} x\_1 \\ \vdots \\ x\_n \end{pmatrix}, \begin{pmatrix} x\_1 \\ \vdots \\ x\_n \end{pmatrix} \right) = \left( \beta \varphi, \varphi \right)\_{\mathcal{G}} \ge 0.$$

To see that the second matrix on the right-hand side in (4.2.3) is also nonnegative, first use Proposition A.3.7 and Proposition A.3.4 to obtain

$$\begin{split} & \left( \left[ \left( \left( \int\_{\mathbb{R}} \frac{1}{t - \lambda\_i} \, \frac{1}{t - \overline{\lambda}\_j} \, d\Sigma(t) \right) \varphi\_j, \varphi\_i \right)\_{\mathcal{G}} \right]\_{i,j=1}^n \begin{pmatrix} x\_1 \\ \vdots \\ x\_n \end{pmatrix}, \begin{pmatrix} x\_1 \\ \vdots \\ x\_n \end{pmatrix} \right) \\ &= \lim\_{a \to -\infty} \lim\_{b \to \infty} \sum\_{i,j=1}^n \int\_a^b \frac{1}{t - \lambda\_i} \frac{1}{t - \overline{\lambda}\_j} \, d\left( \Sigma(t) x\_j \varphi\_j, x\_i \varphi\_i \right)\_{\mathcal{G}}. \end{split} \tag{4.2.4}$$

It is clear that for any finite partition a = t<sup>0</sup> < t<sup>1</sup> < ··· < t<sup>k</sup> = b of a finite interval [a, b] one has

$$\begin{aligned} &\sum\_{l=1}^{k} \sum\_{i,j=1}^{n} \frac{1}{t\_l - \lambda\_i} \frac{1}{t\_l - \overline{\lambda}\_j} \left( (\Sigma(t\_l) - \Sigma(t\_{l-1})) x\_j \varphi\_j, x\_i \varphi\_i \right)\_{\mathcal{G}} \\ &= \sum\_{l=1}^{k} \left( (\Sigma(t\_l) - \Sigma(t\_{l-1})) \sum\_{j=1}^{n} \frac{x\_j \varphi\_j}{t\_l - \overline{\lambda}\_j}, \sum\_{i=1}^{n} \frac{x\_i \varphi\_i}{t\_l - \overline{\lambda}\_i} \right)\_{\mathcal{G}} \ge 0, \end{aligned}$$

and when max |tl − tl−1| tends to zero these finite Riemann–Stieltjes sums converge to

$$\sum\_{i,j=1}^n \int\_a^b \frac{1}{t-\lambda\_i} \frac{1}{t-\overline{\lambda}\_j} \, d\left(\Sigma(t)x\_j\varphi\_j, x\_i\varphi\_i\right)\_{\mathcal{G}} \ge 0.$$

Hence, also (4.2.4) is nonnegative; thus, both matrices on the right-hand side in (4.2.3) are nonnegative, and so is their sum, i.e., the kernel NM is nonnegative. □
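Theorem 4.2.1 can be illustrated numerically for a scalar Nevanlinna function. The sketch below (an illustration with an assumed rational M, not part of the text) builds M(λ) = α + βλ + Σ wk/(tk − λ) with β ≥ 0 and wk > 0, and checks that the Gram matrix [NM(λi, λj)] at random points in the upper half-plane is positive semidefinite.

```python
import numpy as np

t = np.array([-1.0, 0.5, 2.0])   # real poles
w = np.array([0.3, 1.0, 0.7])    # positive weights

def M(lam):
    # a rational scalar Nevanlinna function with alpha = 1.5, beta = 0.2
    return 1.5 + 0.2 * lam + np.sum(w / (t - lam))

def N(lam, mu):
    # Nevanlinna kernel (4.2.1); for scalar M one has M(mu)^* = conj(M(mu))
    return (M(lam) - np.conj(M(mu))) / (lam - np.conj(mu))

rng = np.random.default_rng(3)
pts = rng.standard_normal(6) + 1j * (0.3 + rng.random(6))   # upper half-plane

gram = np.array([[N(a, b) for b in pts] for a in pts])
min_eig = np.linalg.eigvalsh((gram + gram.conj().T) / 2).min()
assert min_eig > -1e-8
```

For six points the Gram matrix here is singular but positive semidefinite: the measure in the integral representation has only three points of growth, so the rank of the second matrix in (4.2.3) is at most three.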

According to Theorem 4.1.5, the nonnegative kernel NM gives rise to a Hilbert space of holomorphic G-valued functions, which will be denoted by H(NM), with inner product ⟨·, ·⟩; cf. Section 4.1. Recall that the reproducing kernel property

$$\langle f, \mathsf{N}\_M(\cdot, \mu)\varphi \rangle = (f(\mu), \varphi)\_{\mathcal{G}}, \qquad \varphi \in \mathcal{G}, \ \mu \in \mathbb{C} \backslash \mathbb{R}, \tag{4.2.5}$$

holds for all functions f ∈ H(NM). The main results in this section concern a Nevanlinna function M and the construction of a self-adjoint relation which represents M in a sense to be explained. The construction will involve the associated reproducing kernel space H(NM).

Let M be a (not necessarily uniformly strict) **B**(G)-valued Nevanlinna function. The first main objective in this section is the construction of a minimal model in which the function

$$
\lambda \mapsto -(M(\lambda) + \lambda)^{-1}
$$

is realized as the compressed resolvent of a self-adjoint relation. The uniqueness of the construction will be discussed after the theorem. Note that the definition of the self-adjoint relation involves multiplication by the independent variable; however, the resulting functions do not necessarily belong to H(NM).
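For a rational scalar Nevanlinna function with β = 0 the compressed-resolvent formula has an elementary finite-dimensional analogue, which may help to fix ideas. The sketch below (an assumed toy model, not the model space construction used here) realizes M(λ) = α + Σ wk/(tk − λ) via the self-adjoint matrix with diagonal block diag(tk), coupling vector (√wk), and lower-right entry −α; a Schur-complement computation shows that the lower-right entry of (Ã − λ)⁻¹ equals −(M(λ) + λ)⁻¹.

```python
import numpy as np

t = np.array([-1.0, 0.5, 2.0])   # real poles of M
w = np.array([0.3, 1.0, 0.7])    # positive weights
alpha = 1.5                      # real constant term (no linear term here)

def M(lam):
    # M(lam) = alpha + sum_k w_k / (t_k - lam), a scalar Nevanlinna function
    return alpha + np.sum(w / (t - lam))

# self-adjoint matrix on C^3 (+) C whose compressed resolvent onto the last
# coordinate reproduces -(M(lam) + lam)^{-1}
A = np.zeros((4, 4))
A[:3, :3] = np.diag(t)
A[:3, 3] = A[3, :3] = np.sqrt(w)
A[3, 3] = -alpha

lam = 0.7 + 1.3j
R = np.linalg.inv(A - lam * np.eye(4))
compressed = R[3, 3]             # analogue of P_G (A - lam)^{-1} iota_G
assert abs(compressed + 1.0 / (M(lam) + lam)) < 1e-10
```

A linear term βλ in M cannot be captured by a bounded finite matrix; in the theorem this is one reason why a self-adjoint relation, rather than an operator, is needed.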

**Theorem 4.2.2.** Let M be a **B**(G)-valued Nevanlinna function and let H(NM) be the associated reproducing kernel Hilbert space. Denote by P<sup>G</sup> the orthogonal projection from H(NM)⊕G onto G and let ι<sup>G</sup> be the canonical embedding of G into H(NM)⊕G. Then

$$\tilde{A} = \left\{ \left\{ \begin{pmatrix} f \\ \varphi \end{pmatrix}, \begin{pmatrix} f' \\ -\varphi' \end{pmatrix} \right\} : \begin{matrix} f, f' \in \mathfrak{H}(\mathbb{N}\_M), \varphi, \varphi' \in \mathfrak{G}, \\ f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi' \end{matrix} \right\} \tag{4.2.6}$$

is a self-adjoint relation in the Hilbert space H(NM) ⊕ G and the compressed resolvent of Ã onto G is given by

$$P\_{\mathbb{G}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathbb{G}} = -(M(\lambda) + \lambda)^{-1}, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.\tag{4.2.7}$$

Furthermore, the self-adjoint relation Ã satisfies the following minimality condition:

$$\mathfrak{H}(\mathsf{N}\_M) \oplus \mathcal{G} = \overline{\text{span}}\left\{ \mathcal{G}, \text{ran}\left(\tilde{A} - \lambda\right)^{-1} \iota\_{\mathcal{G}} : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\}.\tag{4.2.8}$$

Proof. Step 1. The relation Ã in (4.2.6) contains an essentially self-adjoint relation. Indeed, define the relation B in H(NM) ⊕ G by

$$B = \text{span}\left\{ \left\{ \begin{pmatrix} \mathsf{N}\_M(\cdot,\overline{\mu})\varphi\\ -\varphi \end{pmatrix}, \begin{pmatrix} \mu \mathsf{N}\_M(\cdot,\overline{\mu})\varphi\\ M(\mu)\varphi \end{pmatrix} \right\} : \varphi \in \mathcal{G}, \mu \in \mathbb{C} \backslash \mathbb{R} \right\}.$$

It follows from the definition of N<sup>M</sup> in (4.2.1) that

$$
\mu \mathsf{N}\_M(\xi, \overline{\mu}) \varphi - \xi \mathsf{N}\_M(\xi, \overline{\mu}) \varphi = M(\mu) \varphi - M(\xi) \varphi, \quad \varphi \in \mathfrak{G}.
$$

Therefore, one sees from (4.2.6) that B ⊂ Ã. It remains to show that B is essentially self-adjoint.

The symmetry of B is easily verified: it follows from the definition in (4.2.1) and the reproducing kernel property (4.2.5) that

$$\begin{split} \left( \begin{pmatrix} \mu \mathsf{N}\_{M}(\cdot,\overline{\mu})\varphi \\ M(\mu)\varphi \end{pmatrix}, \begin{pmatrix} \mathsf{N}\_{M}(\cdot,\overline{\nu})\psi \\ -\psi \end{pmatrix} \right) - \left( \begin{pmatrix} \mathsf{N}\_{M}(\cdot,\overline{\mu})\varphi \\ -\varphi \end{pmatrix}, \begin{pmatrix} \nu \mathsf{N}\_{M}(\cdot,\overline{\nu})\psi \\ M(\nu)\psi \end{pmatrix} \right) \\ = \langle \mu \mathsf{N}\_{M}(\cdot,\overline{\mu})\varphi, \mathsf{N}\_{M}(\cdot,\overline{\nu})\psi \rangle - (M(\mu)\varphi,\psi)\_{\mathcal{G}} \\ \qquad \qquad \qquad - \langle \mathsf{N}\_{M}(\cdot,\overline{\mu})\varphi, \nu \mathsf{N}\_{M}(\cdot,\overline{\nu})\psi \rangle + (\varphi, M(\nu)\psi)\_{\mathcal{G}} \\ = (\mu - \overline{\nu})(\mathsf{N}\_{M}(\overline{\nu},\overline{\mu})\varphi, \psi)\_{\mathcal{G}} - (M(\mu)\varphi,\psi)\_{\mathcal{G}} + (\varphi, M(\nu)\psi)\_{\mathcal{G}} = 0 \end{split}$$

for all ϕ, ψ <sup>∈</sup> <sup>G</sup> and all μ, ν <sup>∈</sup> <sup>C</sup> \ <sup>R</sup>. This identity implies that <sup>B</sup> is symmetric in H(NM) ⊕ G.

To see that B is essentially self-adjoint, it now suffices to establish that ran (B − λ0) is dense in H(NM) ⊕ G for arbitrary λ0 ∈ C \ R. Observe that it follows from the definition that

$$\text{ran}\left(B - \lambda\_0\right) = \text{span}\left\{ \begin{pmatrix} (\mu - \lambda\_0)\mathbb{N}\_M(\cdot, \overline{\mu})\varphi\\ (M(\mu) + \lambda\_0)\varphi \end{pmatrix} : \varphi \in \mathcal{G}, \mu \in \mathbb{C} \backslash \mathbb{R} \right\}.$$

The choice μ = λ0 together with the fact that −λ0 ∈ ρ(M(λ0)) (see Definition A.4.1) implies that ran (M(λ0) + λ0) = G and hence

$$\{0\} \oplus \mathcal{G} \subset \text{ran}\left(B - \lambda\_0\right). \tag{4.2.9}$$

Therefore, also the elements of the form

$$\begin{pmatrix} \mathsf{N}\_{M}(\cdot,\overline{\mu})\varphi\\ 0 \end{pmatrix}, \quad \varphi \in \mathcal{G}, \quad \mu \in \mathbb{C} \backslash \mathbb{R}, \quad \mu \neq \lambda\_{0},\tag{4.2.10}$$

belong to ran (B − λ0). Moreover, since the set

$$\text{span}\left\{\mathsf{N}\_{M}(\cdot,\mu)\varphi : \varphi\in\mathcal{G}, \,\mu\in\mathbb{C}\backslash\mathbb{R}, \,\mu\neq\lambda\_{0}\right\}$$

is dense in H(NM), see Corollary 4.1.7, it follows from (4.2.9) and (4.2.10) that ran (B − λ0) is dense in H(NM) ⊕ G for all λ0 ∈ C \ R.

Now let B̄ be the closure of the symmetric relation B. It is clear that B̄ is symmetric and that ran (B̄ − λ0) is closed (see Proposition 1.4.4 and Lemma 1.2.2). Hence, it follows from the above considerations that ran (B̄ − λ0) = H(NM) ⊕ G, and Theorem 1.5.5 yields that B̄ is self-adjoint in H(NM) ⊕ G.

Step 2. The relation Ã is self-adjoint. To prove this, it suffices to establish that the closure of B coincides with the relation Ã.

First one shows that the relation Ã is closed. To see this, let

$$\left\{ \begin{pmatrix} f\_n \\ \varphi\_n \end{pmatrix}, \begin{pmatrix} f'\_n \\ -\varphi'\_n \end{pmatrix} \right\} \to \left\{ \begin{pmatrix} f \\ \varphi \end{pmatrix}, \begin{pmatrix} f' \\ -\varphi' \end{pmatrix} \right\} \quad \text{in} \quad \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \mathcal{G} \end{pmatrix} \times \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \mathcal{G} \end{pmatrix},$$

where the sequence on the left-hand side belongs to Ã, so that

$$f\_n'(\xi) - \xi f\_n(\xi) = M(\xi)\varphi\_n - \varphi\_n', \qquad \xi \in \mathbb{C} \backslash \mathbb{R}.$$

Then taking limits in the last identity and using the continuity of point evaluation in H(NM), see Theorem 4.1.5, leads to the identity

$$f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi', \qquad \xi \in \mathbb{C} \backslash \mathbb{R}.$$

Therefore, the relation Ã is closed.

Since B ⊂ Ã and Ã is closed, one has B̄ ⊂ Ã. Hence, to see that Ã = B̄ it suffices to prove the inclusion Ã ⊂ B̄. As B̄ is self-adjoint, one has B̄ = (B̄)∗ = B∗, and so it suffices to show Ã ⊂ B∗. For this let

$$\left\{ \begin{pmatrix} f \\ \varphi \end{pmatrix}, \begin{pmatrix} f' \\ -\varphi' \end{pmatrix} \right\} \in \tilde{A}.$$

Then f, f′ ∈ H(NM), ϕ, ϕ′ ∈ G, and

$$f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi', \qquad \xi \in \mathbb{C} \backslash \mathbb{R}.\tag{4.2.11}$$

For an element in B of the form

$$\left\{ \begin{pmatrix} \mathsf{N}\_M(\cdot,\overline{\mu})\psi\\ -\psi \end{pmatrix}, \begin{pmatrix} \mu\mathsf{N}\_M(\cdot,\overline{\mu})\psi\\ M(\mu)\psi \end{pmatrix} \right\}$$

with ψ ∈ G and some μ ∈ C \ R it follows that

$$\begin{aligned} \left( \begin{pmatrix} f' \\ -\varphi' \end{pmatrix}, \begin{pmatrix} \mathsf{N}\_M(\cdot,\overline{\mu})\psi \\ -\psi \end{pmatrix} \right) - \left( \begin{pmatrix} f \\ \varphi \end{pmatrix}, \begin{pmatrix} \mu \mathsf{N}\_M(\cdot,\overline{\mu})\psi \\ M(\mu)\psi \end{pmatrix} \right) \\ = \left( f'(\overline{\mu}) - \overline{\mu}f(\overline{\mu}) + \varphi' - M(\overline{\mu})\varphi, \psi \right)\_{\mathcal{G}} \\ = \left( M(\overline{\mu})\varphi - \varphi' + \varphi' - M(\overline{\mu})\varphi, \psi \right)\_{\mathcal{G}} = 0, \end{aligned}$$

where (4.2.11) with ξ = μ̄ was used. This implies Ã ⊂ B∗ and thus Ã = B̄. Hence, Ã is self-adjoint in H(NM) ⊕ G.

Step 3. It remains to establish the identities (4.2.7) and (4.2.8). Both are direct consequences of (4.2.6). In fact, let λ ∈ C \ R and note that

$$(\tilde{A} - \lambda)^{-1} = \left\{ \left\{ \begin{pmatrix} f' - \lambda f \\ -\varphi' - \lambda \varphi \end{pmatrix}, \begin{pmatrix} f \\ \varphi \end{pmatrix} \right\} : \begin{matrix} f, f' \in \mathfrak{H}(\mathsf{N}\_M), \varphi, \varphi' \in \mathcal{G}, \\ f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi' \end{matrix} \right\}$$

and hence

$$(\tilde{A} - \lambda)^{-1} \iota\_{\mathcal{G}} = \left\{ \left\{ -\varphi' - \lambda \varphi, \begin{pmatrix} f \\ \varphi \end{pmatrix} \right\} : \begin{aligned} f' &= \lambda f, \ f \in \mathfrak{H}(\mathsf{N}\_M), \varphi, \varphi' \in \mathcal{G}, \\ f'(\xi) - \xi f(\xi) &= M(\xi)\varphi - \varphi' \end{aligned} \right\}.$$

The condition f′ = λf yields (λ − ξ)f(ξ) = M(ξ)ϕ − ϕ′, ξ ∈ C \ R, and setting ξ = λ one obtains ϕ′ = M(λ)ϕ. Conversely, if (λ − ξ)f(ξ) = (M(ξ) − M(λ))ϕ, ϕ′ = M(λ)ϕ, and f′ = λf, then f′(ξ) − ξf(ξ) = M(ξ)ϕ − ϕ′ for ξ ∈ C \ R. Therefore,

$$(\tilde{A} - \lambda)^{-1} \iota\_{\mathcal{G}} = \left\{ \left\{ -(M(\lambda) + \lambda)\varphi, \begin{pmatrix} f \\ \varphi \end{pmatrix} \right\} : \begin{array}{l} f \in \mathfrak{H}(\mathsf{N}\_M), \varphi \in \mathcal{G}, \\ (\lambda - \xi)f(\xi) = (M(\xi) - M(\lambda))\varphi \end{array} \right\}$$

and this yields PG(Ã − λ)−1ιG = −(M(λ) + λ)−1; recall that −λ ∈ ρ(M(λ)) for all λ ∈ C \ R. Hence, (4.2.7) is shown. Moreover, from (4.2.1) it follows that

$$\begin{aligned} \text{ran}\,P\_{\mathfrak{H}(\mathsf{N}\_M)}(\tilde{A}-\lambda)^{-1}\iota\_{\mathcal{G}} &= \left\{ f \in \mathfrak{H}(\mathsf{N}\_M) : (\lambda - \xi)f(\xi) = (M(\xi) - M(\lambda))\varphi, \,\varphi \in \mathcal{G} \right\} \\ &= \left\{ -\mathsf{N}\_M(\cdot, \overline{\lambda})\varphi : \varphi \in \mathcal{G} \right\} \end{aligned}$$

and hence

$$\overline{\operatorname{span}}\left\{\operatorname{ran}P\_{\mathfrak{H}(\mathsf{N}\_M)}(\tilde{A}-\lambda)^{-1}\iota\_{\mathcal{G}} : \,\lambda\in\mathbb{C}\backslash\mathbb{R}\right\} = \mathfrak{H}(\mathsf{N}\_M)$$

by Theorem 4.1.5 and Corollary 4.1.7. This implies (4.2.8). □
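As a simple illustration of (4.2.7), which is not needed for the proof, consider the scalar case G = C and M(λ) = λ, a uniformly strict Nevanlinna function. The kernel is constant,

$$\mathsf{N}\_M(\lambda,\mu) = \frac{M(\lambda) - M(\mu)^\*}{\lambda - \overline{\mu}} = \frac{\lambda - \overline{\mu}}{\lambda - \overline{\mu}} = 1,$$

so that H(NM) consists of the constant functions and can be identified with C. For constants f = c and f′ = c′ the identity f′(ξ) − ξf(ξ) = M(ξ)ϕ − ϕ′ forces ϕ = −c and ϕ′ = −c′, so that in C ⊕ C

$$\tilde{A} = \left\{ \left\{ \begin{pmatrix} c \\ -c \end{pmatrix}, \begin{pmatrix} c' \\ c' \end{pmatrix} \right\} : c, c' \in \mathbb{C} \right\}.$$

Solving

$$\begin{pmatrix} c' - \lambda c \\ c' + \lambda c \end{pmatrix} = \begin{pmatrix} 0 \\ \psi \end{pmatrix}$$

gives c = ψ/(2λ), and the G-component of the corresponding first entry (c, −c)⊤ is −c = −ψ/(2λ) = −(M(λ) + λ)−1ψ, in agreement with (4.2.7).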

The model and the self-adjoint relation in Theorem 4.2.2 are unique up to unitary equivalence. This is a consequence of the following general equivalence result.

**Theorem 4.2.3.** Let G, H, and H′ be Hilbert spaces and let Ã and Ã′ be self-adjoint relations in the product spaces H ⊕ G and H′ ⊕ G, respectively. Denote by PG and P′G the orthogonal projections from H ⊕ G and H′ ⊕ G onto G, respectively, and let ιG and ι′G be the corresponding canonical embeddings. Assume that Ã satisfies the minimality condition

$$\mathfrak{H} \oplus \mathcal{G} = \overline{\text{span}} \left\{ \mathcal{G}, \text{ran} \left( \tilde{A} - \lambda \right)^{-1} \iota\_{\mathcal{G}} : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\} \tag{4.2.12}$$

and that Ã′ satisfies the minimality condition

$$\mathfrak{H}' \oplus \mathcal{G} = \overline{\text{span}}\left\{ \mathcal{G}, \text{ran}\left( \tilde{A}' - \lambda \right)^{-1} \iota\_{\mathcal{G}}' : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\}.\tag{4.2.13}$$


Furthermore, assume that

$$P\_{\mathcal{G}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathcal{G}} = P\_{\mathcal{G}}'(\tilde{A}' - \lambda)^{-1} \iota\_{\mathcal{G}}', \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.\tag{4.2.14}$$

Then Ã and Ã′ are unitarily equivalent, that is, there exists a unitary operator U ∈ **B**(H ⊕ G, H′ ⊕ G) such that Ã′ = UÃU∗.

Proof. Note that the elements of the form

$$\sum\_{j=1}^{n} \left( \alpha\_j \varphi\_j + \beta\_j (\tilde{A} - \lambda\_j)^{-1} \psi\_j \right),\tag{4.2.15}$$

where ϕj, ψj ∈ G, αj, βj ∈ C, λj ∈ C \ R for j = 1,...,n, and n ∈ N are arbitrarily chosen, form a dense subspace of the Hilbert space H ⊕ G by the assumption (4.2.12). Likewise, the elements of the form

$$\sum\_{j=1}^{n'} \left( \alpha\_j' \varphi\_j' + \beta\_j' (\tilde{A}' - \lambda\_j')^{-1} \psi\_j' \right),\tag{4.2.16}$$

where ϕ′j, ψ′j ∈ G, α′j, β′j ∈ C, λ′j ∈ C \ R for j = 1,...,n′, and n′ ∈ N are arbitrarily chosen, form a dense subspace of the Hilbert space H′ ⊕ G by the assumption (4.2.13).

Define the linear relation U from H ⊕ G to H′ ⊕ G as the linear span of all pairs of the form

$$\left\{ \sum\_{j=1}^{n} \left( \alpha\_j \varphi\_j + \beta\_j (\tilde{A} - \lambda\_j)^{-1} \psi\_j \right), \sum\_{j=1}^{n} \left( \alpha\_j \varphi\_j + \beta\_j (\tilde{A}' - \lambda\_j)^{-1} \psi\_j \right) \right\},$$

where ϕj, ψj ∈ G, αj, βj ∈ C, λj ∈ C \ R for j = 1,...,n, and n ∈ N are arbitrarily chosen. Then according to (4.2.15) and (4.2.16) the relation U has a dense domain and a dense range. To show that the relation U is isometric, i.e., ‖h′‖ = ‖h‖ for all {h, h′} ∈ U, one has to verify that

$$\begin{aligned} & \left( \sum\_{j=1}^n \left( \alpha\_j \varphi\_j + \beta\_j (\tilde{A}' - \lambda\_j)^{-1} \psi\_j \right), \sum\_{i=1}^n \left( \alpha\_i \varphi\_i + \beta\_i (\tilde{A}' - \lambda\_i)^{-1} \psi\_i \right) \right) \\ &= \left( \sum\_{j=1}^n \left( \alpha\_j \varphi\_j + \beta\_j (\tilde{A} - \lambda\_j)^{-1} \psi\_j \right), \sum\_{i=1}^n \left( \alpha\_i \varphi\_i + \beta\_i (\tilde{A} - \lambda\_i)^{-1} \psi\_i \right) \right). \end{aligned}$$

To see this, it suffices to observe that (4.2.14) implies

$$\begin{aligned} \left( (\tilde{A}' - \lambda\_j)^{-1} \psi\_j, \varphi\_i \right)\_{\mathcal{G}} &= \left( P'\_{\mathcal{G}} (\tilde{A}' - \lambda\_j)^{-1} \psi\_j, \varphi\_i \right)\_{\mathcal{G}} \\ &= \left( P\_{\mathcal{G}} (\tilde{A} - \lambda\_j)^{-1} \psi\_j, \varphi\_i \right)\_{\mathcal{G}} \\ &= \left( (\tilde{A} - \lambda\_j)^{-1} \psi\_j, \varphi\_i \right)\_{\mathcal{G}} \end{aligned}$$

and, likewise, by symmetry,

$$\left( \varphi\_j, (\tilde{A}' - \lambda\_i)^{-1} \psi\_i \right)\_{\mathfrak{G}} = \left( \varphi\_j, (\tilde{A} - \lambda\_i)^{-1} \psi\_i \right)\_{\mathfrak{G}}.$$

Moreover, using the resolvent identity, one sees that for λj ≠ λ̄i (4.2.14) implies

$$\begin{split} \left( (\tilde{A}' - \lambda\_j)^{-1} \psi\_j, (\tilde{A}' - \lambda\_i)^{-1} \psi\_i \right)\_{\mathcal{G}} \\ &= \left( (\tilde{A}' - \lambda\_j)^{-1} (\tilde{A}' - \overline{\lambda}\_i)^{-1} \psi\_j, \psi\_i \right)\_{\mathcal{G}} \\ &= (\lambda\_j - \overline{\lambda}\_i)^{-1} \left[ \left( (\tilde{A}' - \lambda\_j)^{-1} \psi\_j, \psi\_i \right)\_{\mathcal{G}} - \left( (\tilde{A}' - \overline{\lambda}\_i)^{-1} \psi\_j, \psi\_i \right)\_{\mathcal{G}} \right] \\ &= (\lambda\_j - \overline{\lambda}\_i)^{-1} \left[ \left( (\tilde{A} - \lambda\_j)^{-1} \psi\_j, \psi\_i \right)\_{\mathcal{G}} - \left( (\tilde{A} - \overline{\lambda}\_i)^{-1} \psi\_j, \psi\_i \right)\_{\mathcal{G}} \right] \\ &= \left( (\tilde{A} - \lambda\_j)^{-1} (\tilde{A} - \overline{\lambda}\_i)^{-1} \psi\_j, \psi\_i \right)\_{\mathcal{G}} \\ &= \left( (\tilde{A} - \lambda\_j)^{-1} \psi\_j, (\tilde{A} - \lambda\_i)^{-1} \psi\_i \right)\_{\mathcal{G}}, \end{split}$$

and a limit argument together with the continuity of the resolvent shows that the same is true in the case λj = λ̄i. Thus, the relation U is isometric; hence it is a well-defined isometric operator and the closure of U, denoted again by U, is a unitary operator from H ⊕ G onto H′ ⊕ G.

Next it will be shown that Ã and Ã′ are unitarily equivalent under U. To see this, one needs to show

$$U(\tilde{A} - \lambda)^{-1} = (\tilde{A}' - \lambda)^{-1}U \tag{4.2.17}$$

for some λ ∈ C \ R; cf. Lemma 1.3.8. Since all operators involved are bounded, it suffices to check this identity on a dense set of H ⊕ G; thus, in fact, it suffices to check it only for the elements in (4.2.15). Observe that for elements in (4.2.15) with λ ≠ λj and γj := βj/(λj − λ) one has, again by the resolvent identity,

$$\begin{split} U(\tilde{A} - \lambda)^{-1} \left( \alpha\_{j} \varphi\_{j} + \beta\_{j} (\tilde{A} - \lambda\_{j})^{-1} \psi\_{j} \right) \\ = U \left( (\tilde{A} - \lambda)^{-1} \left( \alpha\_{j} \varphi\_{j} - \gamma\_{j} \psi\_{j} \right) + \gamma\_{j} (\tilde{A} - \lambda\_{j})^{-1} \psi\_{j} \right) \\ = (\tilde{A}' - \lambda)^{-1} \left( \alpha\_{j} \varphi\_{j} - \gamma\_{j} \psi\_{j} \right) + \gamma\_{j} (\tilde{A}' - \lambda\_{j})^{-1} \psi\_{j} \\ = (\tilde{A}' - \lambda)^{-1} \left( \alpha\_{j} \varphi\_{j} + \beta\_{j} (\tilde{A}' - \lambda\_{j})^{-1} \psi\_{j} \right) \\ = (\tilde{A}' - \lambda)^{-1} U \left( \alpha\_{j} \varphi\_{j} + \beta\_{j} (\tilde{A} - \lambda\_{j})^{-1} \psi\_{j} \right), \end{split}$$

and by the continuity of the resolvent the same relation holds also for λ = λj. Thus, (4.2.17) has been established. □

The second main result in this section concerns a minimal model for uniformly strict Nevanlinna functions. By means of Theorem 4.2.2 it will be shown that every uniformly strict Nevanlinna function M is the Weyl function corresponding to a boundary triplet of a simple symmetric operator in the Hilbert space H(NM). After this result it will be shown that every boundary triplet producing the same Weyl

function is unitarily equivalent to the boundary triplet in this construction. Note that the description of (SM)∗ involves functions which do not necessarily belong to H(NM); however, the definition of SM concerns functions which remain in H(NM) after multiplication by the independent variable.

**Theorem 4.2.4.** Let M be a uniformly strict **B**(G)-valued Nevanlinna function and let H(NM) be the associated reproducing kernel Hilbert space. Then

$$S\_M = \left\{ \{f, f'\} \in \mathfrak{H}(\mathsf{N}\_M)^2 \, : \, f'(\xi) = \xi f(\xi) \right\} \tag{4.2.18}$$

is a closed simple symmetric operator in H(NM) and its adjoint is given by

$$(S\_M)^\* = \left\{ \{f, f'\} \in \mathfrak{H}(\mathsf{N}\_M)^2 : f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi', \varphi, \varphi' \in \mathcal{G} \right\}. \tag{4.2.19}$$

Moreover, the mappings

$$
\Gamma\_0 \widehat{f} = \varphi \quad \text{and} \quad \Gamma\_1 \widehat{f} = \varphi', \quad \widehat{f} \in (S\_M)^\*, \tag{4.2.20}
$$

are well defined and {G, Γ0, Γ1} is a boundary triplet for (SM)∗. The corresponding γ-field is given by

$$
\gamma(\lambda)\varphi = -\mathsf{N}\_M(\cdot,\overline{\lambda})\varphi \tag{4.2.21}
$$

and the corresponding Weyl function is given by M.

Proof. By Theorem 4.2.2, the relation

$$\tilde{A} = \left\{ \left\{ \begin{pmatrix} f \\ \varphi \end{pmatrix}, \begin{pmatrix} f' \\ -\varphi' \end{pmatrix} \right\} : \begin{matrix} f, f' \in \mathfrak{H}(\mathsf{N}\_M), \varphi, \varphi' \in \mathcal{G}, \\ f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi' \end{matrix} \right\}$$

is self-adjoint in H(NM) ⊕ G. The relations in (4.2.18) and (4.2.19) are defined in the component space H(NM); the condition that M is uniformly strict makes it possible to connect them with Ã.

Step 1. The relation S<sup>M</sup> in (4.2.18) is a closed symmetric operator with adjoint (SM)<sup>∗</sup> given by (4.2.19). First observe that the relation S<sup>M</sup> in (4.2.18) satisfies

$$S\_M = \tilde{A} \cap \left( \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \{0\} \end{pmatrix} \times \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \{0\} \end{pmatrix} \right), \tag{4.2.22}$$

when the space H(NM) × {0} is identified with H(NM). Since Ã is self-adjoint, (4.2.22) implies that SM is closed and symmetric, and it is clear from (4.2.18) that SM is an operator. In order to find the adjoint of SM, let the relation TM be defined by

$$T\_M = \left\{ \{f, f'\} \in \mathfrak{H}(\mathsf{N}\_M)^2 \, : \, f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi', \,\varphi, \varphi' \in \mathcal{G} \right\}.$$

Observe that

$$\{g, g'\} \in (T\_M)^\* \quad \Leftrightarrow \quad \left\{ \begin{pmatrix} g \\ 0 \end{pmatrix}, \begin{pmatrix} g' \\ 0 \end{pmatrix} \right\} \in \tilde{A}^\* = \tilde{A} \quad \Leftrightarrow \quad \{g, g'\} \in S\_M,$$

which leads to

$$(T\_M)^{\*\*} = (S\_M)^\*.$$

Hence, to conclude (4.2.19), it suffices to show that TM is closed. To see this, assume that {fn, f′n} ∈ TM converges in H(NM)² to {f, f′}. Then there exists a sequence {ϕn, ϕ′n} ∈ G² such that

$$f\_n'(\xi) - \xi f\_n(\xi) = M(\xi)\varphi\_n - \varphi\_n', \quad \xi \in \mathbb{C} \backslash \mathbb{R}.$$

By Theorem 4.1.5 (iii), the point evaluation is continuous, so that

$$f\_n(\xi) \to f(\xi) \quad \text{and} \quad f\_n'(\xi) \to f'(\xi).$$

Hence,

$$M(\xi)\varphi\_n - \varphi'\_n \to f'(\xi) - \xi f(\xi)$$

for all ξ ∈ C \ R. Taking ξ = λ0 and ξ = λ̄0 with λ0 ∈ C \ R one sees that

$$\left(M(\lambda\_0) - M(\overline{\lambda}\_0)\right)\varphi\_n \to f'(\lambda\_0) - \lambda\_0 f(\lambda\_0) - \left(f'(\overline{\lambda}\_0) - \overline{\lambda}\_0 f(\overline{\lambda}\_0)\right)$$

and using that Im M(λ0) is boundedly invertible (since M is assumed to be uniformly strict), it follows that

$$
\varphi\_n \to \varphi \quad \text{and hence also} \quad \varphi'\_n \to \varphi'
$$

for some ϕ, ϕ′ ∈ G. Therefore,

$$f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi', \quad \xi \in \mathbb{C} \backslash \mathbb{R}.$$

In other words, {f, f′} ∈ TM, and hence TM is closed.

Step 2. The mappings in (4.2.20) form a boundary triplet for (SM)∗. First note that they are single-valued. Indeed, assume that {f, f′} ∈ (SM)∗ is the trivial element; then M(ξ)ϕ − ϕ′ = 0 for the corresponding elements ϕ, ϕ′ ∈ G and all ξ ∈ C \ R. Taking ξ = λ0 and ξ = λ̄0 with λ0 ∈ C \ R and using the fact that ker (Im M(λ0)) = {0}, one concludes that ϕ = 0 and ϕ′ = 0.

To verify the abstract Green identity, let f̂ = {f, f′}, ĝ = {g, g′} ∈ (SM)∗. Then

$$f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi' \quad \text{and} \quad g'(\xi) - \xi g(\xi) = M(\xi)\psi - \psi'$$

for some ϕ, ϕ′, ψ, ψ′ in G and all ξ ∈ C \ R; moreover it is clear that

$$\left\{ \begin{pmatrix} f \\ \varphi \end{pmatrix}, \begin{pmatrix} f' \\ -\varphi' \end{pmatrix} \right\}, \; \left\{ \begin{pmatrix} g \\ \psi \end{pmatrix}, \begin{pmatrix} g' \\ -\psi' \end{pmatrix} \right\} \in \tilde{A}.$$

Since Ã is self-adjoint and thus symmetric, one sees that

$$\langle f', g \rangle - \langle f, g' \rangle = (\varphi', \psi)\_{\mathcal{G}} - (\varphi, \psi')\_{\mathcal{G}} = (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g})\_{\mathcal{G}} - (\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g})\_{\mathcal{G}}.$$

Thus, the abstract Green identity is satisfied. It remains to show that Γ maps onto G², which then implies that {G, Γ0, Γ1} is a boundary triplet for (SM)∗.

First, it will be shown that ran Γ is dense in G². Suppose that {α′, α} ∈ G² is orthogonal to ran Γ, that is,

$$(\alpha', \varphi) + (\alpha, \varphi') = 0 \quad \text{for all} \quad \{\varphi, \varphi'\} \in \text{ran}\,\Gamma.$$

It follows that

$$\left\{ \begin{pmatrix} 0\\ \alpha \end{pmatrix}, \begin{pmatrix} 0\\ \alpha' \end{pmatrix} \right\} \in \tilde{A}^\* = \tilde{A}$$

and hence M(ξ)α + α′ = 0 for all ξ ∈ C \ R. Now as above one concludes that α = 0 and α′ = 0. Therefore, ran Γ is dense in G².

Next, it will be shown that ran Γ is closed. For this consider again the self-adjoint relation Ã as a subspace of (H(NM) ⊕ G)² and define the orthogonal projections P and I − P by

$$P: \left(\begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \mathfrak{G} \end{pmatrix} \times \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \mathfrak{G} \end{pmatrix}\right) \to \left(\begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \{0\} \end{pmatrix} \times \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \{0\} \end{pmatrix}\right)$$

and

$$I - P : \left( \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \mathfrak{G} \end{pmatrix} \times \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \mathfrak{G} \end{pmatrix} \right) \to \left( \begin{pmatrix} \{0\} \\ \mathfrak{G} \end{pmatrix} \times \begin{pmatrix} \{0\} \\ \mathfrak{G} \end{pmatrix} \right),$$

respectively. Then PÃ = TM = (SM)∗ is closed and hence, by Lemma C.4,

$$
\tilde{A} \hat{+} \ker P = \tilde{A} \hat{+} \left( \begin{pmatrix} \{0\} \\ \mathcal{G} \end{pmatrix} \times \begin{pmatrix} \{0\} \\ \mathcal{G} \end{pmatrix} \right),
$$

is closed. Since <sup>A</sup> is self-adjoint, it follows from this and (1.3.5) that

$$
\tilde{A} \hat{+} \left( \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \{0\} \end{pmatrix} \times \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_M) \\ \{0\} \end{pmatrix} \right) = \tilde{A} \hat{+} \ker \left( I - P \right)
$$

is closed. By Lemma C.4, the relation (I − P)Ã is closed. Observe that (I − P)Ã is given by

$$\left\{\{\varphi,-\varphi'\}\in\mathcal{G}^2:f'(\xi)-\xi f(\xi)=M(\xi)\varphi-\varphi',\ f,f'\in\mathfrak{H}(\mathsf{N}\_M)\right\}\tag{4.2.23}$$

and hence (4.2.23) is closed. In other words, ran (Γ0, −Γ1) is closed or, equivalently, ran (Γ0, Γ1) is closed.

Step 3. The γ-field and Weyl function corresponding to the boundary triplet (4.2.20) are given by γ in (4.2.21) and M, respectively, and the symmetric operator S<sup>M</sup> in (4.2.18) is simple.

To establish the assertion about M being the Weyl function, fix λ ∈ C \ R and let f̂λ ∈ N̂λ((SM)∗). Then f′λ(ξ) = λfλ(ξ) for all ξ ∈ C \ R, and by (4.2.19)

$$(\lambda - \xi)f\_{\lambda}(\xi) = M(\xi)\varphi\_{\lambda} - \varphi\_{\lambda}', \quad \xi \in \mathbb{C} \backslash \mathbb{R}, \tag{4.2.24}$$

where ϕ<sup>λ</sup> = Γ0f <sup>λ</sup> and ϕ- <sup>λ</sup> = Γ1f <sup>λ</sup>. The choice ξ = λ in (4.2.24) shows M(λ)ϕ<sup>λ</sup> = ϕ- λ and hence

$$M(\lambda)\Gamma\_0\widehat{f}\_\lambda = \Gamma\_1\widehat{f}\_\lambda.$$

As this is true for all f̂λ ∈ N̂λ((SM)∗) and all λ ∈ C \ R, one concludes that M is the Weyl function corresponding to the boundary triplet (4.2.20).

To compute the γ-field and to show that SM is simple, assume again that f̂λ ∈ N̂λ((SM)∗). Then (4.2.24) with ξ ≠ λ implies that

$$f\_{\lambda}(\xi) = \frac{M(\xi)\varphi\_{\lambda} - \varphi\_{\lambda}'}{\lambda - \xi} = -\frac{M(\xi) - M(\lambda)}{\xi - \lambda}\varphi\_{\lambda} = -\mathsf{N}\_{M}(\xi, \overline{\lambda})\varphi\_{\lambda}.$$

Hence, the γ-field corresponding to the boundary triplet (4.2.20) is given by (4.2.21), and since the elements fλ(·) = −NM(·, λ̄)ϕλ ∈ ker ((SM)∗ − λ) form a dense set in the Hilbert space H(NM) (see Theorem 4.1.5 and Corollary 4.1.7), it follows from Corollary 3.4.5 that SM is simple. □
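To see the model of Theorem 4.2.4 at work in the simplest possible situation (an illustration only, not part of the theorem), take G = C and M(λ) = λ. Then NM(λ, μ) ≡ 1, so that H(NM) consists of the constant functions and can be identified with C. For constants f = c and f′ = c′ the condition f′(ξ) = ξf(ξ) for all ξ ∈ C \ R forces c = c′ = 0, so SM is trivial, while c′ − ξc = M(ξ)ϕ − ϕ′ = ξϕ − ϕ′ holds precisely when ϕ = −c and ϕ′ = −c′, so that (SM)∗ = C². Hence (4.2.20) reads

$$\Gamma\_0\{c, c'\} = -c \quad \text{and} \quad \Gamma\_1\{c, c'\} = -c',$$

and for {c, λc} ∈ N̂λ((SM)∗) one obtains

$$\Gamma\_1\{c, \lambda c\} = -\lambda c = M(\lambda)\Gamma\_0\{c, \lambda c\},$$

confirming that the Weyl function of this boundary triplet is M.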

**Corollary 4.2.5.** Let {G, Γ0, Γ1} be the boundary triplet from Theorem 4.2.4 for (SM)∗. Then

$$A\_0 = \ker \Gamma\_0 = \left\{ \{f, f'\} \in \mathfrak{H}(\mathsf{N}\_M)^2 : f'(\xi) - \xi f(\xi) = \varphi', \varphi' \in \mathcal{G} \right\}$$

and

$$A\_1 = \ker \Gamma\_1 = \left\{ \{f, f'\} \in \mathfrak{H}(\mathsf{N}\_M)^2 : f'(\xi) - \xi f(\xi) = M(\xi)\varphi, \,\varphi \in \mathcal{G} \right\}$$

are self-adjoint relations in H(NM).

It follows from Theorem 4.2.4 that mul (SM)∗ consists of all functions M(·)ϕ − ϕ′, ϕ, ϕ′ ∈ G, which belong to the Hilbert space H(NM). Likewise, it follows from Corollary 4.2.5 that mul A0 consists of all constant functions in H(NM), while mul A1 consists of all functions M(·)ϕ, ϕ ∈ G, which belong to the Hilbert space H(NM).
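These descriptions can be checked in the scalar illustration G = C, M(λ) = λ, where H(NM) consists of the constant functions: ξ ↦ M(ξ)ϕ − ϕ′ = ξϕ − ϕ′ is constant only when ϕ = 0, while ξ ↦ M(ξ)ϕ = ξϕ belongs to H(NM) only when ϕ = 0, so that

$$\operatorname{mul}(S\_M)^\* = \operatorname{mul} A\_0 = \mathfrak{H}(\mathsf{N}\_M), \qquad \operatorname{mul} A\_1 = \{0\}.$$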

The construction of the boundary triplet in Theorem 4.2.4 is unique up to unitary equivalence. More precisely, if S is a simple symmetric operator in a Hilbert space H and there is a boundary triplet for S∗ with the same Weyl function M as in Theorem 4.2.4, then the boundary triplets are unitarily equivalent in the sense of Definition 2.5.14 (where G = G′). This is a consequence of the following general equivalence result, which is a further specification of Theorem 4.2.3.

**Theorem 4.2.6.** Let S and S′ be closed simple symmetric operators in Hilbert spaces H and H′, respectively. Let {G, Γ0, Γ1} and {G, Γ′0, Γ′1} be boundary triplets for S∗ and (S′)∗ with γ-fields γ and γ′, respectively. Assume that the corresponding Weyl functions M and M′ coincide. Then the boundary triplets {G, Γ0, Γ1} and {G, Γ′0, Γ′1} are unitarily equivalent by means of a unitary operator U : H → H′ which is determined by the property

$$U\gamma(\lambda) = \gamma'(\lambda), \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.\tag{4.2.25}$$

Proof. The basic idea of the proof follows that of Theorem 4.2.3. By assumption, the Weyl functions M and M′ of the boundary triplets {G, Γ0, Γ1} and {G, Γ′0, Γ′1} coincide. It follows from Proposition 2.3.6 (iii) that the corresponding γ-fields γ and γ′ satisfy the identity

$$\begin{split} \gamma(\mu)^\* \gamma(\lambda) &= \frac{M(\lambda) - M(\mu)^\*}{\lambda - \overline{\mu}} \\ &= \frac{M'(\lambda) - M'(\mu)^\*}{\lambda - \overline{\mu}} = \gamma'(\mu)^\* \gamma'(\lambda) \end{split} \tag{4.2.26}$$

for all λ, μ ∈ C \ R with λ ≠ μ̄, and γ(λ)∗γ(λ) = γ′(λ)∗γ′(λ) follows by continuity. Define the linear relation U from H to H′ as the linear set of all pairs of the form

$$\left\{ \sum\_{j=1}^{n} \alpha\_j \gamma(\lambda\_j) \varphi\_j, \sum\_{j=1}^{n} \alpha\_j \gamma'(\lambda\_j) \varphi\_j \right\},$$

where ϕj ∈ G, αj ∈ C, λj ∈ C \ R for j = 1,...,n, and n ∈ N are arbitrarily chosen. It is clear from the definition of U that its domain is given by

$$\begin{aligned} \text{dom}\,U &= \text{span}\left\{\text{ran}\,\gamma(\lambda) : \lambda \in \mathbb{C} \backslash \mathbb{R}\right\} \\ &= \text{span}\left\{\ker\left(S^{\*} - \lambda\right) : \lambda \in \mathbb{C} \backslash \mathbb{R}\right\}, \end{aligned}$$

and its range is given by

$$\begin{aligned} \text{ran}\,U &= \text{span}\left\{ \text{ran}\,\gamma'(\lambda) : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\} \\ &= \text{span}\left\{ \ker\left( (S')^{\*} - \lambda \right) : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\}, \end{aligned}$$

which are dense in H and H′, respectively, since S and S′ are both simple by assumption; cf. Definition 3.4.3 and Corollary 3.4.5. From (4.2.26) it follows that the relation U is isometric; hence, it is a well-defined isometric operator. Therefore, U extends by continuity to a unitary operator from H onto H′, denoted again by U. From

$$U\gamma(\lambda)\varphi = \gamma'(\lambda)\varphi, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}, \quad \varphi \in \mathfrak{G}, \tag{4.2.27}$$

it follows that for each λ ∈ C \ R the restriction U : ker (S∗ − λ) → ker ((S′)∗ − λ) is unitary. Thus, (4.2.27) implies by Proposition 2.3.6 (ii) that

$$\begin{aligned} \Gamma\{\gamma(\lambda)\varphi,\lambda\gamma(\lambda)\varphi\} &= \{\varphi,M(\lambda)\varphi\} \\ &= \{\varphi,M'(\lambda)\varphi\} \\ &= \Gamma'\{\gamma'(\lambda)\varphi,\lambda\gamma'(\lambda)\varphi\} \\ &= \Gamma'\{U\gamma(\lambda)\varphi,\lambda U\gamma(\lambda)\varphi\}, \end{aligned}$$

and, in particular,

$$\begin{aligned} \Gamma\_0 \{ \gamma(\lambda)\varphi, \lambda\gamma(\lambda)\varphi \} &= \Gamma\_0' \{ U\gamma(\lambda)\varphi, \lambda U\gamma(\lambda)\varphi \}, \\ \Gamma\_1 \{ \gamma(\lambda)\varphi, \lambda\gamma(\lambda)\varphi \} &= \Gamma\_1' \{ U\gamma(\lambda)\varphi, \lambda U\gamma(\lambda)\varphi \}, \end{aligned} \tag{4.2.28}$$

for all λ ∈ C \ R and ϕ ∈ G.

Now let A0 = ker Γ0 and A′0 = ker Γ′0. Then the property (4.2.27) and Proposition 2.3.2 (ii) imply

$$\begin{aligned} U\left(I + (\lambda - \mu)(A\_0 - \lambda)^{-1}\right)\gamma(\mu) &= U\gamma(\lambda) \\ &= \gamma'(\lambda) \\ &= \left(I + (\lambda - \mu)(A\_0' - \lambda)^{-1}\right)\gamma'(\mu) \\ &= \left(I + (\lambda - \mu)(A\_0' - \lambda)^{-1}\right)U\gamma(\mu) \end{aligned}$$

for all λ, μ ∈ C \ R, and hence

$$U(A\_0 - \lambda)^{-1}\gamma(\mu) = (A\_0' - \lambda)^{-1}U\gamma(\mu), \qquad \lambda, \mu \in \mathbb{C} \backslash \mathbb{R}.$$

Since S is simple, one sees that span {ran γ(μ) : μ ∈ C \ R} is a dense subspace of H and hence

$$U(A\_0 - \lambda)^{-1} = (A\_0' - \lambda)^{-1}U, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

and

$$U(A\_0 - \lambda)^{-1}U^\* = (A\_0' - \lambda)^{-1}, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R},\tag{4.2.29}$$

follow. Therefore, by Lemma 1.3.8, the self-adjoint relations A- <sup>0</sup> and A<sup>0</sup> are unitarily equivalent, that is,

$$A\_0' = \left\{ \left\{ Uf, Uf' \right\} : \left\{ f, f' \right\} \in A\_0 \right\}. \tag{4.2.30}$$

This immediately yields

$$
\Gamma\_0' \{ Uf, Uf' \} = 0 \quad \text{and} \quad \Gamma\_0 \{ f, f' \} = 0, \qquad \{ f, f' \} \in A\_0. \tag{4.2.31}
$$

Furthermore, each {f, f′} ∈ A0 can be written as

$$\{f, f'\} = \left\{ (A\_0 - \lambda)^{-1} U^\* g, \left( I + \lambda (A\_0 - \lambda)^{-1} \right) U^\* g \right\}$$

for some g ∈ H′, so that by means of (4.2.29), Proposition 2.3.2 (iv), and (4.2.27) one obtains that

$$\begin{aligned} \Gamma\_1'\{Uf, Uf'\} &= \Gamma\_1'\{U(A\_0-\lambda)^{-1}U^\*g, U\left(I+\lambda(A\_0-\lambda)^{-1}\right)U^\*g\} \\ &= \Gamma\_1'\{(A\_0'-\lambda)^{-1}g, \left(I+\lambda(A\_0'-\lambda)^{-1}\right)g\} \\ &= \gamma'(\overline{\lambda})^\*g \\ &= \gamma(\overline{\lambda})^\*U^\*g \\ &= \Gamma\_1\{(A\_0-\lambda)^{-1}U^\*g, \left(I+\lambda(A\_0-\lambda)^{-1}\right)U^\*g\}. \end{aligned}$$

Therefore,

$$
\Gamma\_1' \{ Uf, Uf' \} = \Gamma\_1 \{ f, f' \}, \qquad \{ f, f' \} \in A\_0. \tag{4.2.32}
$$

To see that the boundary triplets are unitarily equivalent, first recall that A0 and A′0 are unitarily equivalent, see (4.2.30), and that

$$\widehat{\mathfrak{N}}\_{\lambda}((S')^\*) = \left\{ \{Uf\_{\lambda}, \lambda Uf\_{\lambda}\} : \{f\_{\lambda}, \lambda f\_{\lambda}\} \in S^\* \right\};$$

cf. (4.2.27). The direct sum decompositions

$$(S')^\* = A\_0' \hat{+} \widehat{\mathfrak{N}}\_\lambda((S')^\*) \quad \text{and} \quad S^\* = A\_0 \hat{+} \widehat{\mathfrak{N}}\_\lambda(S^\*) \tag{4.2.33}$$

for λ ∈ ρ(A0) = ρ(A′0) from Theorem 1.7.1 now show that

$$(S')^\* = \left\{ \{Uf, Uf'\} : \{f, f'\} \in S^\* \right\},\tag{4.2.34}$$

so that S∗ and (S′)∗ are unitarily equivalent. It follows from (4.2.33) and the equalities (4.2.28), (4.2.31), and (4.2.32) that

$$
\Gamma\_0' \{ Uf, Uf' \} = \Gamma\_0 \{ f, f' \} \quad \text{and} \quad \Gamma\_1' \{ Uf, Uf' \} = \Gamma\_1 \{ f, f' \}, \quad \{ f, f' \} \in S^\*.
$$

Together with (4.2.34) this shows that the boundary triplets are unitarily equivalent. □

Let M be a uniformly strict **B**(G)-valued Nevanlinna function and consider the corresponding model in Theorem 4.2.4. Denote the boundary triplet (4.2.20) for (SM)<sup>∗</sup> in this model by {G,(ΓM)0,(ΓM)1}:

$$(\Gamma\_M)\_0 \widehat{f} = \varphi \quad \text{and} \quad (\Gamma\_M)\_1 \widehat{f} = \varphi', \qquad \widehat{f} = \{f, f'\} \in (S\_M)^\*. \tag{4.2.35}$$

According to Theorem 2.5.1 and Proposition 2.5.3 every operator matrix

$$\mathcal{W} = \begin{pmatrix} W\_{11} & W\_{12} \\ W\_{21} & W\_{22} \end{pmatrix} \in \mathbf{B}(\mathcal{G} \times \mathcal{G}, \mathcal{G} \times \mathcal{G}) \tag{4.2.36}$$

with the properties (2.5.1) gives rise to a boundary triplet $\{G, (\Gamma\_M)'\_0, (\Gamma\_M)'\_1\}$ for $(S\_M)^\*$ via $\Gamma\_M' = \mathcal{W}\Gamma\_M$, that is,

$$
\begin{pmatrix} (\Gamma\_M)'\_0\\ (\Gamma\_M)'\_1 \end{pmatrix} = \begin{pmatrix} W\_{11} & W\_{12} \\ W\_{21} & W\_{22} \end{pmatrix} \begin{pmatrix} (\Gamma\_M)\_0\\ (\Gamma\_M)\_1 \end{pmatrix}, \tag{4.2.37}
$$

and the corresponding γ-field and Weyl function are then given by

$$
\gamma'(\lambda) = \gamma(\lambda) \left( W\_{11} + W\_{12} M(\lambda) \right)^{-1} \tag{4.2.38}
$$

and

$$M'(\lambda) = \left(W\_{21} + W\_{22}M(\lambda)\right) \left(W\_{11} + W\_{12}M(\lambda)\right)^{-1}.\tag{4.2.39}$$

The function $M' = \mathcal{W}[M]$ in (4.2.39), being a Weyl function, is a uniformly strict **B**(G)-valued Nevanlinna function. Let $\mathfrak{H}(\mathsf{N}\_{M'})$ be the associated reproducing kernel Hilbert space. Then, according to Theorem 4.2.4,

$$S\_{M'} = \left\{ \{F, F'\} \in \mathfrak{H}(\mathsf{N}\_{M'})^2 \, : \, F'(\xi) = \xi F(\xi) \right\}$$

is a closed simple symmetric operator in $\mathfrak{H}(\mathsf{N}\_{M'})$ and its adjoint is given by

$$(S\_{M'})^\* = \left\{ \{F, F'\} \in \mathfrak{H}(\mathsf{N}\_{M'})^2 : F'(\xi) - \xi F(\xi) = M'(\xi)\psi - \psi', \,\psi, \psi' \in \mathcal{G} \right\}. \tag{4.2.40}$$

The corresponding boundary triplet $\{G, (\Gamma\_{M'})\_0, (\Gamma\_{M'})\_1\}$ is given by

$$(\Gamma\_{M'})\_0 \widehat{F} = \psi, \quad (\Gamma\_{M'})\_1 \widehat{F} = \psi', \qquad \widehat{F} = \{F, F'\} \in (S\_{M'})^\*, \tag{4.2.41}$$

and according to Theorem 4.2.4 the corresponding Weyl function is $M'$. In the next proposition it will be explained how this model for $M'$ is connected with the space $\mathfrak{H}(\mathsf{N}\_M)$ and the transformed boundary triplet $\{G, (\Gamma\_M)'\_0, (\Gamma\_M)'\_1\}$ for $(S\_M)^\*$ in (4.2.37). The unitary map $\Phi$ in (4.2.43) below provides the unitary equivalence between the boundary triplets in the sense of Theorem 4.2.6.

**Proposition 4.2.7.** Let $M$ be the Weyl function in Theorem 4.2.4 with boundary triplet $\{G, (\Gamma\_M)\_0, (\Gamma\_M)\_1\}$ and let $\mathcal{W}$ be of the form (4.2.36) with the properties in (2.5.1), so that $\{G, (\Gamma\_M)'\_0, (\Gamma\_M)'\_1\}$ in (4.2.37) is a boundary triplet for $(S\_M)^\*$ with corresponding Weyl function $M'$. Then the kernels $\mathsf{N}\_M$ and $\mathsf{N}\_{M'}$ are connected via

$$\mathsf{N}\_{M'}(\lambda, \mu) = \Phi(\lambda)\mathsf{N}\_M(\lambda, \mu)\Phi(\mu)^\*, \tag{4.2.42}$$

where $\Phi : \mathbb{C} \setminus \mathbb{R} \to \mathbf{B}(G)$ is a holomorphic function given by

$$\Phi(\lambda) = \left(W\_{11}^\* + M(\lambda)W\_{12}^\*\right)^{-1}.\tag{4.2.43}$$

Furthermore, $\widehat{f} = \{f, f'\} \in (S\_M)^\*$ if and only if $\widehat{F} = \{\Phi f, \Phi f'\} \in (S\_{M'})^\*$, and the boundary triplets in (4.2.37) and (4.2.41) are connected via

$$(\Gamma\_M)'\_0 \widehat{f} = (\Gamma\_{M'})\_0 \widehat{F} \quad \text{and} \quad (\Gamma\_M)'\_1 \widehat{f} = (\Gamma\_{M'})\_1 \widehat{F} \tag{4.2.44}$$

for $\widehat{f} \in (S\_M)^\*$ and $\widehat{F} = \{\Phi f, \Phi f'\} \in (S\_{M'})^\*$.

Proof. To establish (4.2.42), note that

$$\mathsf{N}\_{M'}(\lambda,\mu) = \frac{M'(\lambda) - M'(\mu)^\*}{\lambda - \overline{\mu}} = \frac{M'(\overline{\mu}) - M'(\overline{\lambda})^\*}{\overline{\mu} - \lambda} = \gamma'(\overline{\lambda})^\* \gamma'(\overline{\mu}),$$

and hence, in view of (4.2.38) and (4.2.43),

$$\begin{aligned} \mathsf{N}\_{M'}(\lambda,\mu) &= \left(W\_{11}^\* + M(\lambda)W\_{12}^\*\right)^{-1} \gamma(\overline{\lambda})^\* \gamma(\overline{\mu}) \left(W\_{11} + W\_{12}M(\overline{\mu})\right)^{-1} \\ &= \Phi(\lambda)\gamma(\overline{\lambda})^\* \gamma(\overline{\mu})\Phi(\mu)^\* \\ &= \Phi(\lambda)\mathsf{N}\_M(\lambda,\mu)\Phi(\mu)^\* \end{aligned}$$

for all $\lambda, \mu \in \mathbb{C} \setminus \mathbb{R}$.

Therefore, according to Proposition 4.1.9, each $\{F, F'\} \in \mathfrak{H}(\mathsf{N}\_{M'})^2$ is of the form

$$\{F, F'\} = \{\Phi f, \Phi f'\}, \quad \{f, f'\} \in \mathfrak{H}(\mathbb{N}\_M)^2,\tag{4.2.45}$$

and conversely. Let $\{F, F'\} \in \mathfrak{H}(\mathsf{N}\_{M'})^2$ and $\{f, f'\} \in \mathfrak{H}(\mathsf{N}\_M)^2$ be connected by (4.2.45); then

$$F'(\xi) - \xi F(\xi) = M'(\xi)\psi - \psi' \quad \Leftrightarrow \quad f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi', \tag{4.2.46}$$

where $\varphi, \varphi' \in G$ and $\psi, \psi' \in G$ are related by

$$
\begin{pmatrix} \varphi \\ \varphi' \end{pmatrix} = \mathcal{W}^{-1} \begin{pmatrix} \psi \\ \psi' \end{pmatrix} = \begin{pmatrix} W\_{22}^\* & -W\_{12}^\* \\ -W\_{21}^\* & W\_{11}^\* \end{pmatrix} \begin{pmatrix} \psi \\ \psi' \end{pmatrix}. \tag{4.2.47}
$$

In fact, if $F'(\xi) - \xi F(\xi) = M'(\xi)\psi - \psi'$, then it follows from (4.2.45), (4.2.39), and (4.2.43) that

$$\begin{split} f'(\xi) - \xi f(\xi) &= \Phi(\xi)^{-1} \left( F'(\xi) - \xi F(\xi) \right) \\ &= \Phi(\xi)^{-1} (M'(\xi)\psi - \psi') \\ &= \Phi(\xi)^{-1} (M'(\overline{\xi})^\*\psi - \psi') \\ &= \Phi(\xi)^{-1} \left( (W\_{11}^\* + M(\xi)W\_{12}^\*)^{-1} (W\_{21}^\* + M(\xi)W\_{22}^\*) \psi - \psi' \right) \\ &= (W\_{21}^\* + M(\xi)W\_{22}^\*) \psi - (W\_{11}^\* + M(\xi)W\_{12}^\*) \psi' \\ &= M(\xi) (W\_{22}^\* \psi - W\_{12}^\* \psi') - (-W\_{21}^\* \psi + W\_{11}^\* \psi') \\ &= M(\xi)\varphi - \varphi', \end{split}$$

where (4.2.47) was used in the last equality. Conversely, if $\{f, f'\} \in \mathfrak{H}(\mathsf{N}\_M)^2$ and $f'(\xi) - \xi f(\xi) = M(\xi)\varphi - \varphi'$, then a similar computation shows that $\{F, F'\}$ in (4.2.45) satisfies $F'(\xi) - \xi F(\xi) = M'(\xi)\psi - \psi'$ with $\psi, \psi'$ from (4.2.47).

Comparing (4.2.19) and (4.2.40), it follows from the equivalence (4.2.46) that

$$(S\_{M'})^\* = \{ \{ \Phi f, \Phi f' \} : \{ f, f' \} \in (S\_M)^\* \}.$$

Moreover, from (4.2.35) and the model in Theorem 4.2.4 one then concludes

$$(\Gamma\_M)\_0 \widehat{f} = \varphi = W\_{22}^\* \psi - W\_{12}^\* \psi' \quad \text{and} \quad (\Gamma\_M)\_1 \widehat{f} = \varphi' = -W\_{21}^\* \psi + W\_{11}^\* \psi'$$

for $\widehat{f} \in (S\_M)^\*$, that is,

$$
\begin{pmatrix} (\Gamma\_M)\_0 \widehat{f} \\ (\Gamma\_M)\_1 \widehat{f} \end{pmatrix} = \begin{pmatrix} W\_{22}^\* & -W\_{12}^\* \\ -W\_{21}^\* & W\_{11}^\* \end{pmatrix} \begin{pmatrix} \psi \\ \psi' \end{pmatrix}, \qquad \widehat{f} \in (S\_M)^\*,
$$

or, equivalently,

$$
\begin{pmatrix} W\_{11} & W\_{12} \\ W\_{21} & W\_{22} \end{pmatrix} \begin{pmatrix} (\Gamma\_M)\_0 \widehat{f} \\ (\Gamma\_M)\_1 \widehat{f} \end{pmatrix} = \begin{pmatrix} \psi \\ \psi' \end{pmatrix}, \qquad \widehat{f} \in (S\_M)^\*;
$$

cf. (2.5.1). The result (4.2.44) now follows from (4.2.37) and (4.2.41). $\square$
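In the scalar case the transformation formulas above are easy to verify numerically. The following sketch (with a hypothetical scalar Nevanlinna function $M$ and a real coefficient matrix of determinant one, which in the scalar setting plays the role of the conditions (2.5.1)) checks the fractional-linear transform (4.2.39) together with the kernel identity (4.2.42):

```python
import numpy as np

# Scalar sanity check of (4.2.39), (4.2.43), and (4.2.42).
# M is a hypothetical scalar Nevanlinna function; [[a, b], [c, d]] is a
# real matrix with determinant one (the scalar analogue of (2.5.1)).
M = lambda z: 1.5 * z - 1.0 / z          # maps the upper half-plane into itself
a, b, c, d = 2.0, 1.0, 1.0, 1.0          # det = 2*1 - 1*1 = 1

def N(F, lam, mu):
    """Nevanlinna kernel (F(lam) - conj(F(mu))) / (lam - conj(mu))."""
    return (F(lam) - np.conj(F(mu))) / (lam - np.conj(mu))

Mp = lambda z: (c + d * M(z)) / (a + b * M(z))   # M' = W[M], cf. (4.2.39)
Phi = lambda z: 1.0 / (a + b * M(z))             # Phi, cf. (4.2.43), real entries

lam, mu = 0.3 + 1.0j, -0.7 + 2.0j
lhs = N(Mp, lam, mu)                               # kernel of the transformed M'
rhs = Phi(lam) * N(M, lam, mu) * np.conj(Phi(mu))  # right-hand side of (4.2.42)
assert abs(lhs - rhs) < 1e-12
```

The identity holds exactly here because the scalar fractional-linear transform has determinant one; the operator-valued case follows the same pattern with adjoints in place of conjugates.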

Finally, note that in Theorem 4.2.2 a model was constructed for a **B**(G)-valued Nevanlinna function $M$. Under the extra condition that the Nevanlinna function $M$ is uniformly strict, Theorem 4.2.4 provides by means of this model a boundary triplet for $(S\_M)^\*$ for which $M$ is the Weyl function. Observe that with this boundary triplet the self-adjoint relation $\widetilde{A}$ in Theorem 4.2.2 in the Hilbert space $\mathfrak{H}(\mathsf{N}\_M) \oplus G$ can be written by means of (4.2.19) and (4.2.20) in the alternative way

$$\tilde{A} = \left\{ \left\{ \begin{pmatrix} f \\ \Gamma\_0 \widehat{f} \end{pmatrix}, \begin{pmatrix} f' \\ -\Gamma\_1 \widehat{f} \end{pmatrix} \right\} : \widehat{f} = \{f, f'\} \in (S\_M)^\* \right\}.$$

This representation is in fact the counterpart of the observations concerning Weyl functions in Proposition 2.7.8.

## **4.3 Realization of scalar Nevanlinna functions via $L^2$-space models**

In the case of a scalar Nevanlinna function M one may also construct a minimal model via the corresponding integral representation

$$M(\lambda) = \alpha + \beta \lambda + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{1 + t^2} \right) d\sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{4.3.1}$$

Here the constants α, β, and the nondecreasing function σ satisfy

$$
\alpha \in \mathbb{R}, \quad \beta \ge 0, \quad \int\_{\mathbb{R}} \frac{1}{1 + t^2} \, d\sigma(t) < \infty. \tag{4.3.2}
$$

It is a consequence of this representation that

$$\frac{\operatorname{Im} M(\lambda)}{\operatorname{Im} \lambda} = \beta + \int\_{\mathbb{R}} \frac{1}{|t - \lambda|^2} \, d\sigma(t), \quad \lambda \in \mathbb{C} \,\backslash \mathbb{R}.$$

Therefore, a scalar Nevanlinna function $M$ is equal to the real constant $\alpha$ if and only if $M$ is not uniformly strict, i.e., if and only if $\operatorname{Im} M(\lambda) = 0$ for some, and hence for all, $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Under the assumption that the Nevanlinna function is not constant, a model involving the integral representation is constructed in this section. Moreover, a concrete natural isomorphism between the new model space and the reproducing kernel Hilbert space $\mathfrak{H}(\mathsf{N}\_M)$ in Theorem 4.2.4 will be given.

The new model is built in the Hilbert space $L^2\_{d\sigma}(\mathbb{R})$ consisting of all (equivalence classes of) complex $d\sigma$-measurable functions $f$ such that $\int\_{\mathbb{R}} |f|^2 \, d\sigma < \infty$, equipped with the scalar product

$$(f,g)\_{L^2\_{d\sigma}(\mathbb{R})} := \int\_{\mathbb{R}} f(t)\overline{g(t)} \, d\sigma(t), \qquad f, g \in L^2\_{d\sigma}(\mathbb{R}).$$

The following observations will be used in the construction of the model. Under the integrability condition on σ in (4.3.2) one has that

$$\frac{t}{1+t^2}, \frac{1}{1+t^2} \in L^2\_{d\sigma}(\mathbb{R}),\tag{4.3.3}$$

and, in addition,

$$f(t), \; tf(t) \in L^2\_{d\sigma}(\mathbb{R}) \quad \Rightarrow \quad f(t) \in L^1\_{d\sigma}(\mathbb{R}).\tag{4.3.4}$$

It is first assumed for convenience that the linear term in the integral representation (4.3.1) is absent, that is, $\beta = 0$. The general case $\beta > 0$ will be discussed afterwards in Theorem 4.3.4. The usual notation for general elements $\{f, f'\}$ will also be used here; the reader should be aware that $f'$ is not the derivative of $f$ here.

**Theorem 4.3.1.** Let M be a scalar Nevanlinna function of the form (4.3.1) with β = 0 and assume that M is uniformly strict, that is, M is not identically equal to a constant. Then

$$S = \left\{ \{f(t), tf(t)\} : f(t), tf(t) \in L^2\_{d\sigma}(\mathbb{R}), \int\_{\mathbb{R}} f(t) \, d\sigma(t) = 0 \right\} \tag{4.3.5}$$

is a closed simple symmetric operator in $L^2\_{d\sigma}(\mathbb{R})$ and its adjoint is given by

$$S^\* = \left\{ \left\{ f(t), f'(t) \right\} : f(t), f'(t) \in L^2\_{d\sigma}(\mathbb{R}), \; tf(t) - f'(t) = c \in \mathbb{C} \right\}.\tag{4.3.6}$$

Moreover, the mappings

$$
\Gamma\_0 \widehat{f} = c \quad \text{and} \quad \Gamma\_1 \widehat{f} = \alpha c + \int\_{\mathbb{R}} \frac{t f'(t) + f(t)}{1 + t^2} \, d\sigma(t), \quad \widehat{f} = \{f, f'\} \in S^\*, \tag{4.3.7}
$$

are well defined and $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ is a boundary triplet for $S^\*$. The corresponding $\gamma$-field is given by the mapping

$$c \mapsto f\_{\lambda}(t) = \frac{c}{t - \lambda} \in \ker(S^\* - \lambda),\tag{4.3.8}$$

and the corresponding Weyl function is M.

Proof. The proof consists of two steps. In Step 1 it will be shown that $S$ in (4.3.5) is a closed symmetric operator and that $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ is a boundary triplet for its adjoint $S^\*$ in (4.3.6). Moreover, it will be shown that the $\gamma$-field is given by (4.3.8) and that $M$ is the corresponding Weyl function. In Step 2 the simplicity of $S$ is concluded from Corollary A.1.5.

Step 1. The right-hand side of (4.3.6) is a relation which satisfies the conditions in Theorem 2.1.9. To see this, denote the relation on the right-hand side of (4.3.6) by $T$ and think of $\Gamma\_0$ and $\Gamma\_1$ as being defined on $T$. Observe that the mapping $\Gamma\_1$ is well defined thanks to (4.3.3). Likewise, the operator $S$ is well defined due to (4.3.4).

First, it is clear that $A\_0 = \ker \Gamma\_0 \subset T$ is the maximal multiplication operator by the independent variable in $L^2\_{d\sigma}(\mathbb{R})$:

$$(A\_0 f)(t) = tf(t), \quad \text{dom}\, A\_0 = \left\{ f(t) \in L^2\_{d\sigma}(\mathbb{R}) : tf(t) \in L^2\_{d\sigma}(\mathbb{R}) \right\}.$$

Since $A\_0$ is a self-adjoint operator in $L^2\_{d\sigma}(\mathbb{R})$, one sees that condition (i) in Theorem 2.1.9 is satisfied.

Next, it will be shown that $\Gamma = (\Gamma\_0, \Gamma\_1)^\top$ is surjective. For this, note that, by (4.3.6), the defect subspace $\widehat{\mathfrak{N}}\_\lambda(T)$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$, of $T$ consists of elements of the form

$$\widehat{f}\_{\lambda} = \left\{ \frac{c\_{\lambda}}{t - \lambda}, \frac{\lambda \, c\_{\lambda}}{t - \lambda} \right\}, \qquad c\_{\lambda} \in \mathbb{C},\tag{4.3.9}$$

which after a simple computation gives

$$
\Gamma\_0 \widehat{f}\_\lambda = c\_\lambda \quad \text{and} \quad \Gamma\_1 \widehat{f}\_\lambda = M(\lambda)c\_\lambda,
$$

or, in other words,

$$
\Gamma\_1 \widehat{f}\_\lambda = M(\lambda) \Gamma\_0 \widehat{f}\_\lambda. \tag{4.3.10}
$$

Using (4.3.10) one now observes that

$$\begin{aligned} \begin{pmatrix} \Gamma\_0(\widehat{f}\_\lambda c\_\lambda + \widehat{f}\_\overline{\lambda} c\_{\overline{\lambda}})\\ \Gamma\_1(\widehat{f}\_\lambda c\_\lambda + \widehat{f}\_{\overline{\lambda}} c\_{\overline{\lambda}}) \end{pmatrix} &= \begin{pmatrix} c\_\lambda + c\_{\overline{\lambda}} \\ M(\lambda)c\_\lambda + M(\overline{\lambda})c\_{\overline{\lambda}} \end{pmatrix} \\ &= \begin{pmatrix} 1 & 1 \\ M(\lambda) & M(\overline{\lambda}) \end{pmatrix} \begin{pmatrix} c\_\lambda \\ c\_{\overline{\lambda}} \end{pmatrix} .\end{aligned}$$

This shows that $\Gamma$ is surjective; just note that $\lambda \in \mathbb{C} \setminus \mathbb{R}$ implies $\operatorname{Im} M(\lambda) \neq 0$, so that the $2 \times 2$ matrix above is invertible. Hence, condition (ii) in Theorem 2.1.9 is satisfied.

Finally, the abstract Green identity for $T$ and $\Gamma$ in Theorem 2.1.9 (iii) will be exhibited. For this purpose, let $\widehat{f} = \{f, f'\}, \widehat{g} = \{g, g'\} \in T$, and assume that $tf(t) - f'(t) = c$ and $tg(t) - g'(t) = d$ for some $c, d \in \mathbb{C}$. Then a calculation shows that

$$\begin{split} \Gamma\_{1}\widehat{f} \, \overline{\Gamma\_{0}\widehat{g}} &- \Gamma\_{0}\widehat{f} \, \overline{\Gamma\_{1}\widehat{g}} \\ &= \left(\alpha c + \int\_{\mathbb{R}} \frac{tf'(t) + f(t)}{1 + t^{2}} \, d\sigma(t)\right) \overline{d} - c \overline{\left(\alpha d + \int\_{\mathbb{R}} \frac{tg'(t) + g(t)}{1 + t^{2}} \, d\sigma(t)\right)} \\ &= \int\_{\mathbb{R}} \frac{tf'(t) + f(t)}{1 + t^{2}} \, \overline{d} \, d\sigma(t) - \int\_{\mathbb{R}} c \, \frac{t\overline{g'(t)} + \overline{g(t)}}{1 + t^{2}} \, d\sigma(t). \end{split}$$

The last line gives, after substitution of c and d,

$$\begin{split} &\int\_{\mathbb{R}} \frac{t f'(t) + f(t)}{1 + t^2} \left( t \overline{g(t)} - \overline{g'(t)} \right) d\sigma(t) - \int\_{\mathbb{R}} \left( t f(t) - f'(t) \right) \frac{t \overline{g'(t)} + \overline{g(t)}}{1 + t^2} d\sigma(t) \\ &= \int\_{\mathbb{R}} f'(t) \overline{g(t)} \, d\sigma(t) - \int\_{\mathbb{R}} f(t) \overline{g'(t)} \, d\sigma(t) \\ &= (f', g)\_{L^2\_{d\sigma}(\mathbb{R})} - (f, g')\_{L^2\_{d\sigma}(\mathbb{R})}. \end{split}$$

Hence, also condition (iii) in Theorem 2.1.9 is satisfied.

Therefore, all conditions of Theorem 2.1.9 have been verified. As a consequence, the relation $\ker \Gamma\_0 \cap \ker \Gamma\_1$ is closed and symmetric. It coincides with $S$ in (4.3.5), since for $\widehat{f} = \{f, f'\} \in T$ one has

$$
\Gamma\_0 \widehat{f} = \Gamma\_1 \widehat{f} = 0 \quad \text{if and only if} \quad f'(t) = tf(t) \text{ and } \int\_{\mathbb{R}} f(t) \, d\sigma(t) = 0.
$$

Thus, it follows from Theorem 2.1.9 that the adjoint of the closed symmetric operator $S$ in (4.3.5) is given by $T$ and hence has the form (4.3.6), and, moreover, $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ is a boundary triplet for $S^\*$. As a byproduct of (4.3.9) and (4.3.10) one sees that the corresponding $\gamma$-field is given by (4.3.8) and that the corresponding Weyl function coincides with $M$.

Step 2. It remains to show that the operator $S$ in (4.3.5) is simple. To see this, assume that there is an element $g \in L^2\_{d\sigma}(\mathbb{R})$ which is orthogonal to all elements $f\_\lambda \in \ker(S^\* - \lambda)$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$, that is,

$$\int\_{\mathbb{R}} \frac{1}{t - \lambda} \, g(t) \, d\sigma(t) = 0, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Then $g = 0$ in $L^2\_{d\sigma}(\mathbb{R})$ by Corollary A.1.5. Thus, the linear span of the defect spaces $\ker(S^\* - \lambda)$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$, is dense in $L^2\_{d\sigma}(\mathbb{R})$, and now Corollary 3.4.5 implies that the symmetric operator $S$ is simple. This completes the proof of Theorem 4.3.1. $\square$

Note that in the model in Theorem 4.3.1 the self-adjoint extension $A\_0$ is equal to the operator of multiplication by the independent variable. The closed minimal operator $S$ is not densely defined if and only if the constant functions belong to $L^2\_{d\sigma}(\mathbb{R})$ or, equivalently, $\sigma$ is a finite measure.
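For a measure $\sigma$ with finitely many point masses all integrals in (4.3.7) become finite sums, and the relation $\Gamma\_1 \widehat{f}\_\lambda = M(\lambda)\Gamma\_0 \widehat{f}\_\lambda$ in (4.3.10) can be checked directly. A small numerical sketch with assumed (hypothetical) masses and support points:

```python
import numpy as np

# Discrete measure: point masses g[i] at points t[i]; integrals are finite sums.
t = np.array([-2.0, 0.5, 3.0])       # hypothetical support points of sigma
g = np.array([1.0, 2.0, 0.5])        # hypothetical masses
alpha = 0.7
lam = 0.4 + 1.3j
c = 1.0 + 0.5j

def M(z):
    # integral representation (4.3.1) with beta = 0
    return alpha + np.sum(g * (1.0 / (t - z) - t / (1.0 + t**2)))

# defect element f_lam(t) = c/(t - lam) with f'_lam = lam * f_lam, cf. (4.3.8)
f = c / (t - lam)
fp = lam * f

Gamma0 = (t * f - fp)[0]             # t f(t) - f'(t) is the constant c
Gamma1 = alpha * Gamma0 + np.sum(g * (t * fp + f) / (1.0 + t**2))   # cf. (4.3.7)

assert np.allclose(t * f - fp, c)             # f_lam belongs to S* in (4.3.6)
assert abs(Gamma1 - M(lam) * Gamma0) < 1e-12  # Weyl function property (4.3.10)
```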

According to Theorem 4.2.6, the $L^2\_{d\sigma}(\mathbb{R})$-space model for the function $M$ and the model in Theorem 4.2.4 are unitarily equivalent thanks to the simplicity of the underlying symmetric operators. A concrete unitary map will be provided in the following proposition.

**Proposition 4.3.2.** Let $M$ be the Nevanlinna function in (4.3.1) with $\beta = 0$, and let $\mathfrak{H}(\mathsf{N}\_M)$ be the associated reproducing kernel Hilbert space. Then the operator $V : L^2\_{d\sigma}(\mathbb{R}) \to \mathfrak{H}(\mathsf{N}\_M)$ defined by the rule

$$f \mapsto -\int\_{\mathbb{R}} \frac{1}{t - \xi} f(t) \, d\sigma(t), \quad \xi \in \mathbb{C} \backslash \mathbb{R}, \tag{4.3.11}$$

is unitary. Moreover, under this mapping the boundary triplets in Theorem 4.3.1 and Theorem 4.2.4 are unitarily equivalent.

Proof. It suffices to show that the operator in (4.3.11) satisfies (4.2.25); cf. Theorem 4.2.6. In fact, recall that the $\gamma$-field corresponding to the boundary triplet in Theorem 4.2.4 at a point $\lambda \in \mathbb{C} \setminus \mathbb{R}$ is given by the mapping

$$c \mapsto -c \, \mathsf{N}\_M(\cdot, \bar{\lambda}) \in \ker \left( (S\_M)^\* - \lambda \right). \tag{4.3.12}$$

The $\gamma$-field corresponding to the boundary triplet in Theorem 4.3.1 at a point $\lambda \in \mathbb{C} \setminus \mathbb{R}$ is given by the mapping

$$c \mapsto f\_{\lambda}(t) = \frac{c}{t - \lambda} \in \ker\left(S^\* - \lambda\right). \tag{4.3.13}$$

It follows from the integral representation (4.3.1) with β = 0 that

$$\mathsf{N}\_M(\xi,\overline{\lambda}) = \frac{M(\xi) - M(\lambda)}{\xi - \lambda} = \int\_{\mathbb{R}} \frac{1}{t - \xi} \frac{1}{t - \lambda} \, d\sigma(t).$$

In view of (4.3.11) and (4.3.13), it follows that

$$(Vf\_\lambda)(\xi) = -\int\_{\mathbb{R}} \frac{1}{t - \xi} \frac{c}{t - \lambda} \, d\sigma(t) = -c \, \mathsf{N}\_M(\xi, \overline{\lambda}),$$

and taking into account (4.3.12) one concludes that (4.2.25) is satisfied. Hence, Theorem 4.2.6 ensures that the operator $V$ in (4.3.11) is well defined and unitary, and the boundary triplets in Theorem 4.3.1 and Theorem 4.2.4 are unitarily equivalent. $\square$

The special case of a rational Nevanlinna function serves as an illustration of Theorem 4.3.1. In this situation the measure $d\sigma$ in (4.3.1) has only finitely many point masses and the space $L^2\_{d\sigma}(\mathbb{R})$ can be identified with $\mathbb{C}^n$.

**Example 4.3.3.** Let $\alpha\_1 \in \mathbb{R}$, $n \in \mathbb{N}$, $\gamma\_1, \dots, \gamma\_n > 0$, and

$$-\infty < \delta\_1 < \delta\_2 < \cdots < \delta\_n < \infty,$$

and consider the rational complex Nevanlinna function

$$N(\lambda) = \alpha\_1 + \sum\_{i=1}^n \frac{\gamma\_i}{\delta\_i - \lambda}, \qquad \lambda \neq \delta\_i, \, i = 1, \ldots, n. \tag{4.3.14}$$

Define a nondecreasing step function $\sigma : \mathbb{R} \to [0, \infty)$ by

$$
\sigma(t) = \begin{cases}
0, & t \in (-\infty, \delta\_1], \\
\gamma\_1, & t \in (\delta\_1, \delta\_2], \\
\gamma\_1 + \gamma\_2, & t \in (\delta\_2, \delta\_3], \\
\dots \\
\gamma\_1 + \gamma\_2 + \dots + \gamma\_n, & t \in (\delta\_n, \infty),
\end{cases}
$$

and consider the corresponding $L^2$-space $L^2\_{d\sigma}(\mathbb{R})$ with the scalar product

$$(f,g)\_{L^2\_{d\sigma}(\mathbb{R})} = \int\_{\mathbb{R}} f(t)\overline{g(t)} \, d\sigma(t) = \sum\_{i=1}^n \gamma\_i \, f(\delta\_i) \overline{g(\delta\_i)}.$$

The rational Nevanlinna function N in (4.3.14) admits the integral representation

$$N(\lambda) = \alpha + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{1 + t^2} \right) \, d\sigma(t)$$

as in (4.3.1), where

$$\alpha := \alpha\_1 + \int\_{\mathbb{R}} \frac{t}{1 + t^2} \, d\sigma(t) = \alpha\_1 + \sum\_{i=1}^n \gamma\_i \, \frac{\delta\_i}{1 + \delta\_i^2}.$$

The Hilbert space $L^2\_{d\sigma}(\mathbb{R})$ can be identified with $(\mathbb{C}^n, (\cdot, \cdot)\_\gamma)$, where

$$(x, y)\_{\gamma} := \sum\_{i=1}^{n} \gamma\_i \, x\_i \overline{y}\_i, \quad x = (x\_1, \dots, x\_n)^\top, \\ y = (y\_1, \dots, y\_n)^\top \in \mathbb{C}^n,$$

via the unitary mapping

$$L^2\_{d\sigma}(\mathbb{R}) \ni f \mapsto \begin{pmatrix} f(\delta\_1) \\ \vdots \\ f(\delta\_n) \end{pmatrix},$$

and the maximal operator of multiplication by the independent variable in $L^2\_{d\sigma}(\mathbb{R})$ is unitarily equivalent to the diagonal matrix

$$
\begin{pmatrix}
\delta\_1 & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & \delta\_n
\end{pmatrix}.
\tag{4.3.15}
$$

As

$$\int\_{\mathbb{R}} f(t) \, d\sigma(t) = \sum\_{i=1}^{n} \gamma\_i f(\delta\_i) = \left( \begin{pmatrix} f(\delta\_1) \\ \vdots \\ f(\delta\_n) \end{pmatrix}, \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} \right)\_{\gamma},$$

the simple symmetric operator $S$ in Theorem 4.3.1 is unitarily equivalent to the restriction of the diagonal matrix in (4.3.15) to the orthogonal complement of the subspace $\operatorname{span} \{(1, \dots, 1)^\top\}$. Furthermore, $S^\*$ corresponds to the relation

$$\left\{ \{ f, f' \} \in \mathbb{C}^n \times \mathbb{C}^n : \delta\_i f(\delta\_i) - f'(\delta\_i) = c \in \mathbb{C}, \, i = 1, \dots, n \right\}$$

and the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ in Theorem 4.3.1 is of the form

$$
\Gamma\_0 \widehat{f} = c, \quad \Gamma\_1 \widehat{f} = \alpha c + \sum\_{i=1}^n \gamma\_i \, \frac{\delta\_i f'(\delta\_i) + f(\delta\_i)}{1 + \delta\_i^2}, \quad \widehat{f} = \{f, f'\} \in S^\*.
$$

According to Theorem 4.3.1, the corresponding Weyl function is the rational Nevanlinna function N in (4.3.14).
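The finite-dimensional model in this example can also be exercised in code. The sketch below uses assumed data and the resolvent formula $N(\lambda) = \alpha\_1 + ((A\_0 - \lambda)^{-1}e, e)\_\gamma$ with $e = (1, \dots, 1)^\top$, which is not stated in the text but follows immediately from the partial fraction expansion (4.3.14):

```python
import numpy as np

# Hypothetical data for Example 4.3.3.
alpha1 = -0.3
delta = np.array([-1.0, 0.0, 2.0])   # delta_1 < delta_2 < delta_3
gamma = np.array([0.5, 1.0, 2.0])    # positive masses
e = np.ones(3)                       # the vector (1, ..., 1)^T

A0 = np.diag(delta)                  # the diagonal matrix (4.3.15)
lam = 0.7 + 0.9j

# (A_0 - lam)^{-1} e has components 1/(delta_i - lam)
resolvent = np.linalg.inv(A0 - lam * np.eye(3)) @ e
inner = np.sum(gamma * resolvent * np.conj(e))     # scalar product (.,.)_gamma

N_via_model = alpha1 + inner                       # resolvent realization
N_direct = alpha1 + np.sum(gamma / (delta - lam))  # partial fractions (4.3.14)
assert abs(N_via_model - N_direct) < 1e-12
```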

Now the general case of a scalar Nevanlinna function of the form (4.3.1) with β > 0 will be addressed.

**Theorem 4.3.4.** Let M be a scalar Nevanlinna function of the form (4.3.1) with β > 0. Then

$$S = \left\{ \left\{ \begin{pmatrix} f(t) \\ 0 \end{pmatrix}, \begin{pmatrix} tf(t) \\ h' \end{pmatrix} \right\} : f(t), tf(t) \in L^2\_{d\sigma}(\mathbb{R}), \int\_{\mathbb{R}} f(t) \, d\sigma(t) = -\beta^{1/2} h' \right\}$$

is a closed simple symmetric operator in $L^2\_{d\sigma}(\mathbb{R}) \oplus \mathbb{C}$ and its adjoint is given by

$$S^\* = \left\{ \left\{ \begin{pmatrix} f(t) \\ h \end{pmatrix}, \begin{pmatrix} f'(t) \\ h' \end{pmatrix} \right\} : \begin{array}{l} f(t), f'(t) \in L^2\_{d\sigma}(\mathbb{R}), h, h' \in \mathbb{C}, \\ t f(t) - f'(t) = \beta^{-1/2} h \end{array} \right\}.$$

Moreover, for $\widehat{f} \in S^\*$ the mappings

$$
\Gamma\_0 \widehat{f} = \beta^{-1/2} h \quad \text{and} \quad \Gamma\_1 \widehat{f} = \alpha \beta^{-1/2} h + \int\_{\mathbb{R}} \frac{t f'(t) + f(t)}{1 + t^2} \, d\sigma(t) + \beta^{1/2} h'
$$

are well defined and $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ is a boundary triplet for $S^\*$. The corresponding $\gamma$-field is given by the mapping

$$c \mapsto f\_{\lambda}(t) = \begin{pmatrix} \frac{c}{t - \lambda} \\ \beta^{1/2} c \end{pmatrix} \in \ker(S^\* - \lambda) \tag{4.3.16}$$

and the corresponding Weyl function is given by M.

Proof. The proof is similar to the one of Theorem 4.3.1, and thus only a brief sketch will be given. Denote the right-hand side of the formula for $S^\*$ by $T$ and think of $\Gamma\_0$ and $\Gamma\_1$ as being defined on $T$. It is clear that $A\_0 = \ker \Gamma\_0 \subset T$ is the orthogonal componentwise sum of the maximal multiplication operator by the independent variable in $L^2\_{d\sigma}(\mathbb{R})$ and the purely multivalued part $\{0\} \times \mathbb{C}$. Hence, $A\_0$ is a self-adjoint relation. To show that $\Gamma$ is surjective, note that the defect subspace $\widehat{\mathfrak{N}}\_\lambda(T)$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$, of $T$ consists of elements of the form

$$\widehat{f}\_{\lambda} = \left\{ \begin{pmatrix} \frac{c\_{\lambda}}{t-\lambda} \\ \beta^{1/2}c\_{\lambda} \end{pmatrix}, \begin{pmatrix} \frac{\lambda c\_{\lambda}}{t-\lambda} \\ \lambda \beta^{1/2}c\_{\lambda} \end{pmatrix} \right\} \in \widehat{\mathfrak{N}}\_{\lambda}(S^\*),$$

which gives

$$
\Gamma\_0 \widehat{f}\_\lambda = c\_\lambda \quad \text{and} \quad \Gamma\_1 \widehat{f}\_\lambda = M(\lambda) c\_\lambda.
$$

Again, as in the proof of Theorem 4.3.1 it follows that $\Gamma$ is surjective. It can be checked by a straightforward calculation as in the proof of Theorem 4.3.1 that the abstract Green identity is satisfied. Thus, by Theorem 2.1.9, one concludes that $T$ is the adjoint of the closed symmetric relation $S$ and that $\Gamma\_0$ and $\Gamma\_1$ define a boundary triplet for $S^\*$. Hence, the statements about the $\gamma$-field and the Weyl function follow.

To show the simplicity of $S$, assume that there is an element orthogonal to $\mathfrak{N}\_\lambda(S^\*)$ for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$, i.e., there exist an element $g \in L^2\_{d\sigma}(\mathbb{R})$ and a constant $\gamma \in \mathbb{C}$ such that

$$\int\_{\mathbb{R}} \frac{1}{t - \lambda} \, g(t) \, d\sigma(t) = \gamma, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

For $\lambda = iy$ and $y \to \infty$ it follows that $\gamma = 0$, and hence Corollary A.1.5 implies $g = 0$. Therefore, the closed linear span of all $\mathfrak{N}\_\lambda(S^\*)$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$, is equal to $L^2\_{d\sigma}(\mathbb{R}) \oplus \mathbb{C}$, and, as a consequence, the closed symmetric operator $S$ is simple. $\square$

In the situation of Theorem 4.3.4 one sees that $S$ is a nondensely defined operator and that $\operatorname{mul} S^\*$ is spanned by the vector

$$\begin{pmatrix} 0\\1 \end{pmatrix} \in L^2\_{d\sigma}(\mathbb{R}) \oplus \mathbb{C}. \tag{4.3.17}$$

Now $A\_0$ is the only self-adjoint extension of $S$ which is multivalued: it is the orthogonal sum of multiplication by the independent variable in $L^2\_{d\sigma}(\mathbb{R})$ and the space spanned by (4.3.17).
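As in the case $\beta = 0$, the Weyl function property $\Gamma\_1 \widehat{f}\_\lambda = M(\lambda)\Gamma\_0 \widehat{f}\_\lambda$ can be verified numerically for a discrete measure; the sketch below (with assumed data) exercises the additional $\mathbb{C}$-component of the model, which produces the linear term $\beta\lambda$ of $M$:

```python
import numpy as np

# Hypothetical discrete measure and beta > 0.
t = np.array([-1.0, 1.0, 4.0])       # support points
g = np.array([2.0, 0.5, 1.0])        # masses
alpha, beta = 0.2, 1.5
lam = -0.6 + 0.8j
c = 2.0 - 1.0j

M = alpha + beta * lam + np.sum(g * (1.0 / (t - lam) - t / (1.0 + t**2)))  # (4.3.1)

# defect element (4.3.16): L^2-component c/(t - lam), C-component beta^{1/2} c,
# the image components carry the extra factor lam, cf. the proof of Theorem 4.3.4
f = c / (t - lam)
fp = lam * f
h = np.sqrt(beta) * c
hp = lam * np.sqrt(beta) * c

Gamma0 = h / np.sqrt(beta)
Gamma1 = alpha * h / np.sqrt(beta) \
    + np.sum(g * (t * fp + f) / (1.0 + t**2)) + np.sqrt(beta) * hp

assert abs(Gamma0 - c) < 1e-14
assert abs(Gamma1 - M * c) < 1e-12   # the gamma-field yields the Weyl function M
```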

**Proposition 4.3.5.** Let $M$ be the Nevanlinna function in (4.3.1) with $\beta > 0$ and let $\mathfrak{H}(\mathsf{N}\_M)$ be the associated reproducing kernel Hilbert space. Then the operator $V : L^2\_{d\sigma}(\mathbb{R}) \oplus \mathbb{C} \to \mathfrak{H}(\mathsf{N}\_M)$ given by the rule

$$\begin{pmatrix} f \\ h \end{pmatrix} \mapsto -\beta^{1/2}h - \int\_{\mathbb{R}} \frac{1}{t-\xi} f(t) \, d\sigma(t)$$

is unitary. Moreover, under this mapping the boundary triplets in Theorem 4.3.4 and Theorem 4.2.4 are unitarily equivalent.

Proof. The proof is similar to the one of Proposition 4.3.2 and will be sketched briefly. Recall that the $\gamma$-field corresponding to the boundary triplet in Theorem 4.2.4 at a point $\lambda \in \mathbb{C} \setminus \mathbb{R}$ is given by the mapping

$$c \mapsto -c \, \mathsf{N}\_M(\cdot, \overline{\lambda}) \in \ker \left( (S\_M)^\* - \lambda \right), \tag{4.3.18}$$

while the $\gamma$-field corresponding to the boundary triplet in Theorem 4.3.4 at a point $\lambda \in \mathbb{C} \setminus \mathbb{R}$ is given by the mapping

$$c \mapsto f\_{\lambda}(t) = \begin{pmatrix} \frac{c}{t - \lambda} \\ \beta^{1/2} c \end{pmatrix} \in \ker \left( S^\* - \lambda \right). \tag{4.3.19}$$

It follows from the integral representation (4.3.1) that

$$\mathsf{N}\_{M}(\xi,\overline{\lambda}) = \frac{M(\xi) - M(\lambda)}{\xi - \lambda} = \beta + \int\_{\mathbb{R}} \frac{1}{t - \xi} \frac{1}{t - \lambda} \, d\sigma(t).$$

Hence, (4.3.18)–(4.3.19) and the fact that

$$(Vf\_\lambda)(\xi) = -c\beta - \int\_{\mathbb{R}} \frac{1}{t - \xi} \frac{c}{t - \lambda} \, d\sigma(t) = -c \, \mathsf{N}\_M(\xi, \overline{\lambda}),$$

imply that property (4.2.25) holds. Hence, the boundary triplets in Theorem 4.3.4 and Theorem 4.2.4 are unitarily equivalent. $\square$

In the following it is briefly explained how the self-adjoint multiplication operator in $L^2\_{d\sigma}(\mathbb{R})$ and the model discussed in this section (in the case $\beta = 0$) are connected with the spectral theory and the limit properties of the Weyl function in Chapter 3. For this, assume that $\sigma : \mathbb{R} \to \mathbb{R}$ is a nondecreasing function such that

$$\int\_{\mathbb{R}} \frac{1}{1+t^2} \, d\sigma(t) < \infty$$

and consider the self-adjoint multiplication operator

$$(A\_0 f)(t) = t f(t), \quad \text{dom}\, A\_0 = \left\{ f \in L^2\_{d\sigma}(\mathbb{R}) : t \mapsto t f(t) \in L^2\_{d\sigma}(\mathbb{R}) \right\}$$

in $L^2\_{d\sigma}(\mathbb{R})$. Then it is known from Example 3.3.7 that the spectrum $\sigma(A\_0)$ coincides with the set of growth points of the function $\sigma$, see (3.2.1), and the same is true for the absolutely continuous part $\sigma\_{\mathrm{ac}}$, the singular continuous part $\sigma\_{\mathrm{sc}}$, and the singular part $\sigma\_{\mathrm{s}}$ of $\sigma$. On the other hand, the one-dimensional restriction

$$S = \left\{ \left\{ f(t), tf(t) \right\} : f(t), tf(t) \in L^2\_{d\sigma}(\mathbb{R}), \int\_{\mathbb{R}} f(t) \, d\sigma(t) = 0 \right\}$$

of $A\_0$ in Theorem 4.3.1 is a closed simple symmetric operator in $L^2\_{d\sigma}(\mathbb{R})$ and $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ in (4.3.7) is a boundary triplet for $S^\*$ in (4.3.6) with $A\_0 = \ker \Gamma\_0$ and corresponding Weyl function

$$M(\lambda) = \alpha + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{1 + t^2} \right) d\sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{4.3.20}$$

where $\alpha$ is an arbitrary real number in the definition of the boundary map $\Gamma\_1$ in (4.3.7). Hence, the results on the description of the spectrum of $A\_0$ via the limit properties of the Weyl function from Section 3.5 and Section 3.6 apply in the present situation. For example, Theorem 3.6.5 shows that

$$\sigma\_{\text{ac}}(A\_0) = \text{clos}\_{\text{ac}}\left(\left\{ x \in \mathbb{R} : 0 < \text{Im}\, M(x + i0) < \infty \right\} \right),$$

which is also clear from Theorem 3.2.6 (i), taking into account (3.1.25) and Corollary 3.1.8 (ii). Similar observations can be made for the other spectral subsets. In other words, in the special situation where $A\_0$ is the self-adjoint multiplication operator in $L^2\_{d\sigma}(\mathbb{R})$, the general description of the spectrum of $A\_0$ and its subsets in Chapter 3 in terms of the limit properties of the associated Weyl function in (4.3.20) agrees with Example 3.3.7.
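These limit properties are easy to observe numerically when $\sigma$ has finitely many point masses, so that $\sigma(A\_0)$ consists of eigenvalues: by the Stieltjes inversion formula, $y \operatorname{Im} M(x + iy)$ recovers the mass of $\sigma$ at $x$ as $y \downarrow 0$, while $\operatorname{Im} M(x + iy) \to 0$ off the support. A sketch with assumed data:

```python
import numpy as np

# Hypothetical discrete measure: masses g[i] at points t[i].
t = np.array([-1.0, 0.0, 2.0])
g = np.array([0.5, 1.0, 2.0])
alpha = 0.0

def M(z):
    # Weyl function (4.3.20) for the discrete measure
    return alpha + np.sum(g * (1.0 / (t - z) - t / (1.0 + t**2)))

y = 1e-6
mass = y * M(t[1] + 1j * y).imag     # at the mass point t = 0: recovers g[1]
off = M(1.0 + 1j * y).imag           # between mass points: tends to 0

assert abs(mass - g[1]) < 1e-6
assert abs(off) < 1e-3
```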

## **4.4 Realization of Nevanlinna pairs and generalized resolvents**

In this section the model from Section 4.2 for Nevanlinna functions will be extended to the general setting of Nevanlinna pairs and of generalized resolvents. As a byproduct the extended model leads to the Sz.-Nagy dilation theorem.

Let $G$ be a Hilbert space and let $\{A, B\}$ be a Nevanlinna pair of **B**(G)-valued functions; cf. Section 1.12. The associated Nevanlinna kernel $\mathsf{N}\_{A,B}$

$$\mathsf{N}\_{A,B}(\cdot,\cdot): \Omega \times \Omega \to \mathbf{B}(\mathcal{G})$$

is defined on $\Omega = \mathbb{C} \setminus \mathbb{R}$ by

$$\mathsf{N}\_{A,B}(\lambda,\mu) = \frac{B(\overline{\lambda})^\* A(\overline{\mu}) - A(\overline{\lambda})^\* B(\overline{\mu})}{\lambda - \overline{\mu}}, \quad \lambda, \mu \in \mathbb{C} \backslash \mathbb{R}, \quad \lambda \neq \overline{\mu}, \tag{4.4.1}$$

and $\mathsf{N}\_{A,B}(\lambda, \overline{\lambda}) = B'(\overline{\lambda})^\* A(\lambda) - A'(\overline{\lambda})^\* B(\lambda)$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Then clearly the kernel $\mathsf{N}\_{A,B}$ is symmetric. Recall that $\lambda \mapsto A(\lambda)$ and $\lambda \mapsto B(\lambda)$ are holomorphic mappings on $\mathbb{C} \setminus \mathbb{R}$. Hence,

$$
\lambda \mapsto \mathsf{N}\_{A,B}(\lambda, \mu)
$$

is holomorphic for each $\mu \in \mathbb{C} \backslash \mathbb{R}$, that is, the kernel $\mathsf{N}\_{A,B}$ is holomorphic. Moreover, it follows from (4.4.1) and Definition 1.12.3 that

$$\mathsf{N}\_{A,B}(\lambda,\lambda) = \frac{\mathrm{Im}\left(A(\overline{\lambda})^\* B(\overline{\lambda})\right)}{\mathrm{Im}\,\overline{\lambda}} \ge 0, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

In the next theorem it is shown that the kernel $\mathsf{N}\_{A,B}$ is, in fact, nonnegative on $\mathbb{C} \backslash \mathbb{R}$. Note also that the kernel $\mathsf{N}\_{A,B}$ is uniformly bounded on compact subsets of $\mathbb{C} \backslash \mathbb{R}$, since

$$\big\| \mathsf{N}\_{A,B}(\lambda,\lambda) \big\| \le \frac{\| A(\overline{\lambda}) \| \, \| B(\overline{\lambda}) \|}{|\mathrm{Im}\,\lambda|}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

**Theorem 4.4.1.** Let $\{A, B\}$ be a Nevanlinna pair in $\mathcal{G}$. Then the kernel $\mathsf{N}\_{A,B}$ is nonnegative.

Proof. Let $N$ be a uniformly strict $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function and let $\varepsilon > 0$. Then $\varepsilon N$ is again a uniformly strict Nevanlinna function. Define the function $S\_\varepsilon$ by

$$S\_{\varepsilon}(\lambda) = -A(\lambda) \left( \varepsilon N(\lambda) A(\lambda) + B(\lambda) \right)^{-1}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

By Proposition 1.12.6, $S\_\varepsilon$ is a Nevanlinna function. A calculation shows that the Nevanlinna kernel associated with the function $S\_\varepsilon$ is of the form

$$\begin{split} \mathsf{N}\_{S\_{\varepsilon}}(\lambda,\mu) &= \left(\varepsilon N(\overline{\lambda})A(\overline{\lambda}) + B(\overline{\lambda})\right)^{-\ast} \cdot \\ &\quad \cdot \left[\mathsf{N}\_{A,B}(\lambda,\mu) + \varepsilon A(\overline{\lambda})^{\ast}\mathsf{N}\_{N}(\lambda,\mu)A(\overline{\mu})\right] \left(\varepsilon N(\overline{\mu})A(\overline{\mu}) + B(\overline{\mu})\right)^{-1} .\end{split} \tag{4.4.2}$$

Observe that for any $\varepsilon > 0$ the kernel $\mathsf{N}\_{S\_\varepsilon}$ is nonnegative, since $S\_\varepsilon$ is a Nevanlinna function. The identity (4.4.2) shows that the kernel

$$\mathsf{N}\_{A,B}(\lambda,\mu) + \varepsilon A(\overline{\lambda})^{\*}\mathsf{N}\_{N}(\lambda,\mu)A(\overline{\mu}) \tag{4.4.3}$$

is nonnegative for any ε > 0.

To show that the kernel $\mathsf{N}\_{A,B}$ is nonnegative, assume the contrary. Then it follows from the definition of nonnegativity that there exist $n \in \mathbb{N}$, $\lambda\_1, \dots, \lambda\_n \in \mathbb{C} \backslash \mathbb{R}$, elements $\varphi\_1, \dots, \varphi\_n \in \mathcal{G}$, and a vector $c \in \mathbb{C}^n$ such that

$$\left( \left( (\mathsf{N}\_{A,B}(\lambda\_i, \lambda\_j)\varphi\_j, \varphi\_i)\_{\mathcal{G}} \right)\_{i,j=1}^n c, c \right) = x < 0.$$

Since $-x > 0$ and the kernel $\mathsf{N}\_N$ is nonnegative, one can choose $\varepsilon > 0$ so small that

$$0 \le \varepsilon \left( \left( (A(\overline{\lambda}\_i)^\* \mathsf{N}\_N(\lambda\_i, \lambda\_j) A(\overline{\lambda}\_j) \varphi\_j, \varphi\_i)\_{\mathcal{G}} \right)\_{i,j=1}^n c, c \right) < -x.$$

Combining these results one arrives at the inequality

$$\left( \left( \left( \left( \mathsf{N}\_{A,B}(\lambda\_i, \lambda\_j) + \varepsilon A(\overline{\lambda}\_i)^\* \mathsf{N}\_N(\lambda\_i, \lambda\_j) A(\overline{\lambda}\_j) \right) \varphi\_j, \varphi\_i \right)\_{\mathcal{G}} \right)\_{i,j=1}^n c, c \right) < 0,$$

which contradicts the nonnegativity of the kernel in (4.4.3). Thus, the kernel $\mathsf{N}\_{A,B}$ is nonnegative. $\square$
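Theorem 4.4.1 can be illustrated with a quick numerical sanity check (an aside, not part of the text): in the scalar case $\mathcal{G} = \mathbb{C}$ one samples the kernel (4.4.1) for a pair $\{A, B\} = \{1, M\}$, where the Nevanlinna function $M$ and the sample points below are arbitrary illustrative choices, and tests that the resulting Gram matrices are positive semidefinite. Assuming NumPy:

```python
import numpy as np

# Scalar (assumed) Nevanlinna pair {A, B} = {1, M}; the kernel (4.4.1)
# then reduces to (M(lam) - M(mu)*) / (lam - conj(mu)).
def A(z):
    return 1.0

def B(z):
    return z - 1.0 / z + 1.0 / (1.0 - z)   # illustrative Nevanlinna function

def N_AB(lam, mu):
    lc, mc = np.conj(lam), np.conj(mu)
    # B(conj lam)* A(conj mu) - A(conj lam)* B(conj mu), divided by lam - conj(mu)
    num = np.conj(B(lc)) * A(mc) - np.conj(A(lc)) * B(mc)
    return num / (lam - mc)

rng = np.random.default_rng(1)
pts = rng.normal(size=10) + 1j * (0.2 + rng.random(10))   # sample points in C \ R
K = np.array([[N_AB(li, lj) for lj in pts] for li in pts])

assert np.linalg.norm(K - K.conj().T) < 1e-9   # the kernel is symmetric
assert np.linalg.eigvalsh(K).min() > -1e-9     # Gram matrix is positive semidefinite
```

Every choice of sample points yields a positive semidefinite Gram matrix, which is exactly the nonnegativity asserted in the theorem.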

Let $\{A, B\}$ be a Nevanlinna pair in $\mathcal{G}$. According to Theorem 4.1.5, with the nonnegative kernel $\mathsf{N}\_{A,B}$ there is associated a Hilbert space of holomorphic $\mathcal{G}$-valued functions, which will be denoted by $\mathfrak{H}(\mathsf{N}\_{A,B})$, with inner product $\langle \cdot, \cdot \rangle$; cf. Section 4.1. Recall that the reproducing kernel property

$$\langle f, \mathsf{N}\_{A,B}(\cdot,\mu)\varphi\rangle = (f(\mu), \varphi)\_{\mathcal{G}}, \qquad \varphi \in \mathcal{G}, \ \mu \in \mathbb{C} \backslash \mathbb{R},$$

holds for all functions $f \in \mathfrak{H}(\mathsf{N}\_{A,B})$. The following realization result extends Theorem 4.2.2 to the case of Nevanlinna pairs. It follows from Theorem 4.2.3 that this construction is unique up to unitary equivalence. Note in this context that a Nevanlinna function $M$ always gives rise to a Nevanlinna pair $\{I, M\}$.
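A standard consequence of the reproducing kernel property, via the Cauchy–Schwarz inequality, is the point-evaluation bound $|f(\mu)|^2 \le \langle f, f\rangle \, \mathsf{N}(\mu,\mu)$ in the scalar case. The following numerical aside checks this on a finite span of kernel sections; the scalar Nevanlinna function $M$, the points, and the coefficients are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

# Scalar kernel of an (assumed) Nevanlinna function M(z) = z - 1/z.
def M(z):
    return z - 1.0 / z

def N(lam, mu):
    return (M(lam) - np.conj(M(mu))) / (lam - np.conj(mu))

lams = [0.5 + 1.0j, -1.0 + 0.7j, 2.0 + 0.3j]
c = np.array([1.0, -2.0, 0.5 + 1.0j])            # f = sum_j c_j N(., lam_j)

K = np.array([[N(li, lj) for lj in lams] for li in lams])
norm_sq = np.real(np.conj(c) @ K @ c)            # <f, f> via the Gram matrix

mu = 1.5 + 0.4j
f_mu = sum(cj * N(mu, lj) for cj, lj in zip(c, lams))   # f(mu) = <f, N(., mu)>

assert norm_sq >= 0.0                                       # <f, f> >= 0
assert abs(f_mu) ** 2 <= norm_sq * np.real(N(mu, mu)) + 1e-12   # |f(mu)|^2 <= <f,f> N(mu,mu)
```

The bound makes point evaluation continuous on $\mathfrak{H}(\mathsf{N})$, which is the mechanism behind the reproducing kernel construction of Theorem 4.1.5.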

**Theorem 4.4.2.** Let $\{A, B\}$ be a Nevanlinna pair in $\mathcal{G}$ and let $\tau = \{A, B\}$ be the corresponding Nevanlinna family. Let $\mathfrak{H}(\mathsf{N}\_{A,B})$ be the associated reproducing kernel Hilbert space generated by $\{A, B\}$. Denote by $P\_{\mathcal{G}}$ the orthogonal projection from $\mathfrak{H}(\mathsf{N}\_{A,B}) \oplus \mathcal{G}$ onto $\mathcal{G}$ and let $\iota\_{\mathcal{G}}$ be the canonical embedding of $\mathcal{G}$ into $\mathfrak{H}(\mathsf{N}\_{A,B}) \oplus \mathcal{G}$. Then

$$\tilde{A}\_{A,B} = \left\{ \left\{ \begin{pmatrix} f \\ \varphi \end{pmatrix}, \begin{pmatrix} f' \\ -\varphi' \end{pmatrix} \right\} : \begin{matrix} f, f' \in \mathfrak{H}(\mathsf{N}\_{A,B}),\ \varphi, \varphi' \in \mathcal{G}, \\ f'(\xi) - \xi f(\xi) = B(\overline{\xi})^{\*} \varphi - A(\overline{\xi})^{\*} \varphi' \end{matrix} \right\}$$

is a self-adjoint relation in the Hilbert space $\mathfrak{H}(\mathsf{N}\_{A,B}) \oplus \mathcal{G}$ and the compressed resolvent of $\tilde{A}\_{A,B}$ onto $\mathcal{G}$ is given by

$$P\_{\mathcal{G}}(\tilde{A}\_{A,B} - \lambda)^{-1} \iota\_{\mathcal{G}} = -(\tau(\lambda) + \lambda)^{-1}, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{4.4.4}$$

Furthermore, the self-adjoint relation $\tilde{A}\_{A,B}$ satisfies the following minimality condition:

$$\mathfrak{H}(\mathsf{N}\_{A,B}) \oplus \mathcal{G} = \overline{\operatorname{span}} \left\{ \mathcal{G}, \operatorname{ran} \left( \tilde{A}\_{A,B} - \lambda \right)^{-1} \iota\_{\mathcal{G}} : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\}. \tag{4.4.5}$$

Proof. The proof is almost the same as the proof of Theorem 4.2.2; therefore, only the main elements are recalled and the details are left to the reader.

Step 1. Use the Nevanlinna pair $\{A, B\}$ to define the auxiliary relation $B$ in $\mathfrak{H}(\mathsf{N}\_{A,B}) \oplus \mathcal{G}$ by

$$B = \operatorname{span}\left\{ \left\{ \begin{pmatrix} \mathsf{N}\_{A,B}(\cdot,\overline{\mu})\varphi\\ -A(\mu)\varphi \end{pmatrix}, \begin{pmatrix} \mu\mathsf{N}\_{A,B}(\cdot,\overline{\mu})\varphi\\ B(\mu)\varphi \end{pmatrix} \right\} : \varphi \in \mathcal{G}, \,\mu \in \mathbb{C} \backslash \mathbb{R} \right\}.$$

It is a direct computation to show that $B \subset \tilde{A}\_{A,B}$. Likewise, a similar computation shows that $B$ is symmetric in $\mathfrak{H}(\mathsf{N}\_{A,B}) \oplus \mathcal{G}$. Observe that for $\lambda\_0 \in \mathbb{C} \backslash \mathbb{R}$ one has

$$\operatorname{ran}(B-\lambda\_0) = \operatorname{span}\left\{ \begin{pmatrix} (\mu-\lambda\_0)\mathsf{N}\_{A,B}(\cdot,\overline{\mu})\varphi\\ (B(\mu)+\lambda\_0 A(\mu))\varphi \end{pmatrix} : \varphi \in \mathcal{G}, \,\mu \in \mathbb{C} \backslash \mathbb{R} \right\}.$$

Therefore, choosing $\mu = \lambda\_0$ and taking into account that

$$\operatorname{ran}\left(B(\lambda\_0) + \lambda\_0 A(\lambda\_0)\right) = \mathcal{G}$$

by Definition 1.12.3 and Lemma 1.12.5, it follows that $\{0\} \oplus \mathcal{G} \subset \operatorname{ran}(B - \lambda\_0)$; hence also the elements of the form

$$\begin{pmatrix} \mathsf{N}\_{A,B}(\cdot,\overline{\mu})\varphi\\ 0 \end{pmatrix}, \quad \varphi \in \mathcal{G}, \quad \mu \in \mathbb{C} \backslash \mathbb{R}, \quad \mu \neq \lambda\_0,$$

belong to $\operatorname{ran}(B - \lambda\_0)$. It follows from Corollary 4.1.7 that $\operatorname{ran}(B - \lambda\_0)$ is dense in $\mathfrak{H}(\mathsf{N}\_{A,B}) \oplus \mathcal{G}$, and thus $B$ is essentially self-adjoint.

Step 2. One verifies in the same way as in the proof of Theorem 4.2.2 that $\tilde{A}\_{A,B}$ is closed and that $\tilde{A}\_{A,B} \subset B^\*$. Since $B \subset \tilde{A}\_{A,B}$ and the closure of $B$ is self-adjoint, it follows that $\tilde{A}\_{A,B}$ is self-adjoint in $\mathfrak{H}(\mathsf{N}\_{A,B}) \oplus \mathcal{G}$.

Step 3. The statement (4.4.4) follows in a similar way as in Theorem 4.2.2. In fact, observe that $(\tilde{A}\_{A,B} - \lambda)^{-1}$ consists of all the elements

$$\left\{ \begin{pmatrix} f' - \lambda f \\ -\varphi' - \lambda \varphi \end{pmatrix}, \begin{pmatrix} f \\ \varphi \end{pmatrix} \right\}, \qquad f, f' \in \mathfrak{H}(\mathsf{N}\_{A,B}), \quad \varphi, \varphi' \in \mathcal{G},$$

for which

$$f'(\xi) - \xi f(\xi) = B(\overline{\xi})^\* \varphi - A(\overline{\xi})^\* \varphi', \qquad \xi \in \mathbb{C} \backslash \mathbb{R}.\tag{4.4.6}$$

Hence, the compression $P\_{\mathcal{G}}(\tilde{A}\_{A,B} - \lambda)^{-1}\iota\_{\mathcal{G}}$ is formed by the pairs

$$\{ -\varphi' - \lambda \varphi, \varphi \}, \qquad \varphi, \varphi' \in \mathcal{G}, \tag{4.4.7}$$

which satisfy (4.4.6) for some $f, f' \in \mathfrak{H}(\mathsf{N}\_{A,B})$ and, in addition, $f'(\xi) = \lambda f(\xi)$ for $\xi \in \mathbb{C} \backslash \mathbb{R}$. This implies that (4.4.6) becomes

$$(\lambda - \xi)f(\xi) = B(\overline{\xi})^\* \varphi - A(\overline{\xi})^\* \varphi', \qquad \xi \in \mathbb{C} \backslash \mathbb{R},$$

and the choice ξ = λ gives

$$B(\overline{\lambda})^\* \varphi = A(\overline{\lambda})^\* \varphi'. \tag{4.4.8}$$

On the other hand, as $\tau(\lambda) = \{\{A(\lambda)\psi, B(\lambda)\psi\} : \psi \in \mathcal{G}\}$, one has by the symmetry property of the Nevanlinna family $\tau$ and (1.10.3) that

$$\tau(\lambda) = \tau(\overline{\lambda})^\* = \left\{ \{ \psi, \psi' \} : B(\overline{\lambda})^\* \psi = A(\overline{\lambda})^\* \psi' \right\},$$

and since the pair $\{\varphi, \varphi'\}$ in (4.4.7) satisfies (4.4.8), it follows that $\{\varphi, \varphi'\} \in \tau(\lambda)$. Hence, $\{-\varphi' - \lambda\varphi, \varphi\} \in -(\tau(\lambda) + \lambda)^{-1}$, which yields the inclusion

$$P\_{\mathfrak{G}}(\widetilde{A}\_{A,B} - \lambda)^{-1} \iota\_{\mathfrak{G}} \subset -(\tau(\lambda) + \lambda)^{-1}.$$

Since the compressed resolvent of $\tilde{A}\_{A,B}$ and $-(\tau(\lambda) + \lambda)^{-1}$ are both everywhere defined and bounded operators, (4.4.4) follows.

Finally, the minimality condition (4.4.5) is shown in the same way as in the proof of Theorem 4.2.2. $\square$

Theorem 4.4.2 provides a representation of the resolvent of the Nevanlinna family τ in terms of the model for the Nevanlinna pair {A, B}. Now let {C, D} be a Nevanlinna pair which is equivalent to {A, B}:

$$C(\lambda) = A(\lambda)X(\lambda) \quad \text{and} \quad D(\lambda) = B(\lambda)X(\lambda), \tag{4.4.9}$$

where $X(\lambda)$, $\lambda \in \mathbb{C} \backslash \mathbb{R}$, is a bounded and boundedly invertible holomorphic operator function in $\mathbf{B}(\mathcal{G})$; cf. Section 1.12. Then the kernels $\mathsf{N}\_{A,B}$ and $\mathsf{N}\_{C,D}$ of the Nevanlinna pairs in (4.4.9) are connected by

$$\mathsf{N}\_{C,D}(\lambda,\mu) = X(\overline{\lambda})^\* \mathsf{N}\_{A,B}(\lambda,\mu) X(\overline{\mu}). \tag{4.4.10}$$

The following special case is of interest.

**Lemma 4.4.3.** Let $\{A, B\}$ be a Nevanlinna pair in $\mathcal{G}$ and consider the bounded and boundedly invertible holomorphic operator function $X(\lambda) = (B(\lambda) + \lambda A(\lambda))^{-1}$, $\lambda \in \mathbb{C} \backslash \mathbb{R}$. Then the Nevanlinna pair $\{C, D\}$ in (4.4.9) satisfies

$$C(\lambda)^\* = C(\overline{\lambda}), \quad D(\lambda)^\* = D(\overline{\lambda}), \quad \text{and} \quad D(\lambda) + \lambda C(\lambda) = I,$$

and for $\lambda, \mu \in \mathbb{C} \backslash \mathbb{R}$ the corresponding Nevanlinna kernel can be written as

$$\mathsf{N}\_{C,D}(\lambda,\mu) = \frac{D(\lambda)C(\mu)^{\*} - C(\lambda)D(\mu)^{\*}}{\lambda - \overline{\mu}} = \frac{C(\mu)^{\*} - C(\lambda)}{\lambda - \overline{\mu}} - C(\lambda)C(\mu)^{\*}.$$
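Lemma 4.4.3 can be checked numerically in the scalar case. The pair $\{A, B\} = \{1, M\}$ with $M(z) = z - 1/z$ below is an arbitrary illustrative choice; the normalized pair $C = X$, $D = MX$ then satisfies the stated identities up to rounding.

```python
import numpy as np

# Scalar (assumed) pair {A, B} = {1, M}; the multiplier X = (B + lam*A)^{-1}
# yields the normalized pair C = X, D = M*X of Lemma 4.4.3.
def M(z):
    return z - 1.0 / z

def C(z):
    return 1.0 / (M(z) + z)

def D(z):
    return M(z) * C(z)

lam, mu = 0.8 + 0.6j, -1.2 + 0.9j
assert abs(C(np.conj(lam)) - np.conj(C(lam))) < 1e-12   # C(lam)* = C(conj lam)
assert abs(D(lam) + lam * C(lam) - 1.0) < 1e-12         # D(lam) + lam*C(lam) = I

# The two expressions for the kernel N_{C,D} agree:
k1 = (D(lam) * np.conj(C(mu)) - C(lam) * np.conj(D(mu))) / (lam - np.conj(mu))
k2 = (np.conj(C(mu)) - C(lam)) / (lam - np.conj(mu)) - C(lam) * np.conj(C(mu))
assert abs(k1 - k2) < 1e-12
```

The second kernel formula follows from the first by substituting $D(\lambda) = I - \lambda C(\lambda)$, which is exactly what the normalization achieves.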

Let again $\{A, B\}$ be a Nevanlinna pair, let $\tau$ be the corresponding Nevanlinna family, and consider a Nevanlinna pair $\{C, D\}$ which is equivalent to $\{A, B\}$ via (4.4.9), so that it generates the same Nevanlinna family $\tau$. Then, according to Theorem 4.4.2,

$$\tilde{A}\_{C,D} = \left\{ \left\{ \begin{pmatrix} F \\ \psi \end{pmatrix}, \begin{pmatrix} F' \\ -\psi' \end{pmatrix} \right\} : \begin{matrix} F, F' \in \mathfrak{H}(\mathsf{N}\_{C,D}),\ \psi, \psi' \in \mathcal{G}, \\ F'(\xi) - \xi F(\xi) = D(\overline{\xi})^\* \psi - C(\overline{\xi})^\* \psi' \end{matrix} \right\} \tag{4.4.11}$$

is a self-adjoint relation in the Hilbert space $\mathfrak{H}(\mathsf{N}\_{C,D}) \oplus \mathcal{G}$ and the compressed resolvent of $\tilde{A}\_{C,D}$ onto $\mathcal{G}$ is given by

$$P\_{\mathcal{G}}(\tilde{A}\_{C,D} - \lambda)^{-1} \iota\_{\mathcal{G}} = -(\tau(\lambda) + \lambda)^{-1}, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Furthermore, the self-adjoint relation $\tilde{A}\_{C,D}$ satisfies the following minimality condition:

$$\mathfrak{H}(\mathsf{N}\_{C,D}) \oplus \mathcal{G} = \overline{\operatorname{span}} \left\{ \mathcal{G}, \operatorname{ran} \left( \tilde{A}\_{C,D} - \lambda \right)^{-1} \iota\_{\mathcal{G}} : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\}.$$

The explicit connection between the various models involving these kernels in Theorem 4.4.2 now depends on Proposition 4.1.9. The corresponding self-adjoint relations are then unitarily equivalent in the sense of Definition 1.3.7 and Lemma 1.3.8.

**Lemma 4.4.4.** Let the Nevanlinna pairs {A, B} and {C, D} be equivalent in the sense of (4.4.9). Then the mapping U defined by

$$U: \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_{A,B}) \\ \mathcal{G} \end{pmatrix} \to \begin{pmatrix} \mathfrak{H}(\mathsf{N}\_{C,D}) \\ \mathcal{G} \end{pmatrix}, \qquad \begin{pmatrix} f(\xi) \\ \varphi \end{pmatrix} \mapsto \begin{pmatrix} X(\overline{\xi})^\* f(\xi) \\ \varphi \end{pmatrix}, \tag{4.4.12}$$

is unitary. Moreover, the self-adjoint relation $\tilde{A}\_{A,B}$ in $\mathfrak{H}(\mathsf{N}\_{A,B}) \oplus \mathcal{G}$ in Theorem 4.4.2 and the self-adjoint relation $\tilde{A}\_{C,D}$ in $\mathfrak{H}(\mathsf{N}\_{C,D}) \oplus \mathcal{G}$ in (4.4.11) are unitarily equivalent under the mapping $U$, that is, $\tilde{A}\_{C,D} = U \tilde{A}\_{A,B} U^\*$.

Proof. In the identity (4.4.10) set $\Phi(\lambda) = X(\overline{\lambda})^\*$ with $X(\lambda)$ as in (4.4.9). Since $X(\lambda)$ is boundedly invertible, one may apply Proposition 4.1.9, and hence $U$ in (4.4.12) is unitary. Now consider an element

$$\left\{ \begin{pmatrix} f \\ \varphi \end{pmatrix}, \begin{pmatrix} f' \\ -\varphi' \end{pmatrix} \right\} \in \tilde{A}\_{A,B},$$

so that

$$f'(\xi) - \xi f(\xi) = B(\overline{\xi})^\* \varphi - A(\overline{\xi})^\* \varphi', \quad \xi \in \mathbb{C} \backslash \mathbb{R}.$$

Then with $F(\xi) = X(\overline{\xi})^\* f(\xi)$ and $F'(\xi) = X(\overline{\xi})^\* f'(\xi)$ it follows that

$$F'(\xi) - \xi F(\xi) = X(\overline{\xi})^\* B(\overline{\xi})^\* \varphi - X(\overline{\xi})^\* A(\overline{\xi})^\* \varphi' = D(\overline{\xi})^\* \varphi - C(\overline{\xi})^\* \varphi'.$$

This implies

$$\left\{ U \begin{pmatrix} f \\ \varphi \end{pmatrix}, U \begin{pmatrix} f' \\ -\varphi' \end{pmatrix} \right\} = \left\{ \begin{pmatrix} F \\ \varphi \end{pmatrix}, \begin{pmatrix} F' \\ -\varphi' \end{pmatrix} \right\} \in \tilde{A}\_{C,D}.$$

One verifies in the same way that every element

$$\left\{ \begin{pmatrix} F \\ \varphi \end{pmatrix}, \begin{pmatrix} F' \\ -\varphi' \end{pmatrix} \right\} \in \tilde{A}\_{C,D} $$

can be written in the form

$$\left\{ U \begin{pmatrix} f \\ \varphi \end{pmatrix}, U \begin{pmatrix} f' \\ -\varphi' \end{pmatrix} \right\} \quad \text{for some} \quad \left\{ \begin{pmatrix} f \\ \varphi \end{pmatrix}, \begin{pmatrix} f' \\ -\varphi' \end{pmatrix} \right\} \in \tilde{A}\_{A,B}.$$

This shows that the self-adjoint relations $\tilde{A}\_{A,B}$ and $\tilde{A}\_{C,D}$ are unitarily equivalent under the mapping $U$; cf. Definition 1.3.7. $\square$

The discussions in this section so far centered mainly on Nevanlinna pairs and will now be put in a slightly different context.

**Definition 4.4.5.** Let $\mathfrak{H}$ be a Hilbert space and let $R$ be a $\mathbf{B}(\mathfrak{H})$-valued function defined on $\mathbb{C} \backslash \mathbb{R}$. Then $R$ is called a generalized resolvent if it has the following properties:

(i) $R$ is holomorphic on $\mathbb{C} \backslash \mathbb{R}$;

(ii) $R(\lambda)^\* = R(\overline{\lambda})$ for all $\lambda \in \mathbb{C} \backslash \mathbb{R}$;

(iii) $\dfrac{\mathrm{Im}\, R(\lambda)}{\mathrm{Im}\, \lambda} - R(\lambda)R(\lambda)^\* \ge 0$ for all $\lambda \in \mathbb{C} \backslash \mathbb{R}$.
With the function $R$ one associates the kernel $\mathsf{R}\_R$

$$
\mathsf{R}\_R(\cdot,\cdot) : \Omega \times \Omega \to \mathbf{B}(\mathfrak{H}),
$$

defined on $\Omega = \mathbb{C} \backslash \mathbb{R}$ by

$$\mathsf{R}\_R(\lambda, \mu) = \frac{R(\lambda) - R(\mu)^\*}{\lambda - \overline{\mu}} - R(\lambda)R(\mu)^\*, \quad \lambda, \mu \in \mathbb{C} \backslash \mathbb{R}, \quad \lambda \neq \overline{\mu}, \tag{4.4.13}$$

and $\mathsf{R}\_R(\lambda, \overline{\lambda}) = R'(\lambda) - R(\lambda)^2$, $\lambda \in \mathbb{C} \backslash \mathbb{R}$. Then clearly the kernel $\mathsf{R}\_R$ is symmetric. Since $\lambda \mapsto R(\lambda)$ is holomorphic, the mapping $\lambda \mapsto \mathsf{R}\_R(\lambda, \mu)$ is holomorphic for each $\mu \in \mathbb{C} \backslash \mathbb{R}$, that is, the kernel $\mathsf{R}\_R$ is holomorphic. Note also that the kernel $\mathsf{R}\_R$ is uniformly bounded on compact subsets of $\mathbb{C} \backslash \mathbb{R}$, since

$$\| \mathsf{R}\_R(\lambda,\lambda) \| \le \frac{\| R(\lambda) \|}{|\mathrm{Im}\,\lambda|} + \| R(\lambda) \|^2, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

For $\mathsf{R}\_R$ to be a reproducing kernel in the sense of Theorem 4.1.5 one needs nonnegativity.

**Lemma 4.4.6.** Let $R : \mathbb{C} \backslash \mathbb{R} \to \mathbf{B}(\mathfrak{H})$ be a generalized resolvent. Then the kernel $\mathsf{R}\_R(\cdot,\cdot)$ is nonnegative.

Proof. Introduce the pair of **B**(H)-valued functions C and D by

$$C(\lambda) = -R(\lambda) \quad \text{and} \quad D(\lambda) = I + \lambda R(\lambda), \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Since R is a generalized resolvent, a straightforward computation shows that {C, D} is a Nevanlinna pair and that the kernels satisfy

$$\mathsf{N}\_{C,D}(\lambda,\mu) = \mathsf{R}\_R(\lambda,\mu), \quad \lambda,\mu \in \mathbb{C} \backslash \mathbb{R};\tag{4.4.14}$$

cf. (4.4.1) and (4.4.13). Now it follows from Theorem 4.4.1 (with $\mathcal{G} = \mathfrak{H}$) that the kernel $\mathsf{R}\_R(\cdot,\cdot)$ is nonnegative. $\square$

Let $R : \mathbb{C} \backslash \mathbb{R} \to \mathbf{B}(\mathfrak{H})$ be a generalized resolvent. By Theorem 4.1.5, the corresponding nonnegative kernel $\mathsf{R}\_R$ induces a Hilbert space of holomorphic $\mathfrak{H}$-valued functions, which will be denoted by $\mathfrak{H}(\mathsf{R}\_R)$, with inner product $\langle \cdot, \cdot \rangle$; cf. Section 4.1. Recall that the reproducing kernel property

$$\langle f, \mathsf{R}\_R(\cdot, \mu)\varphi \rangle = (f(\mu), \varphi)\_{\mathfrak{H}}, \qquad \varphi \in \mathfrak{H}, \ \mu \in \mathbb{C} \backslash \mathbb{R},$$

holds for all functions $f \in \mathfrak{H}(\mathsf{R}\_R)$. The following result gives a representation of the function $R$.

**Corollary 4.4.7.** Let $R : \mathbb{C} \backslash \mathbb{R} \to \mathbf{B}(\mathfrak{H})$ be a generalized resolvent and let $\mathfrak{H}(\mathsf{R}\_R)$ be the associated reproducing kernel Hilbert space. Denote by $P\_{\mathfrak{H}}$ the orthogonal projection from $\mathfrak{H}(\mathsf{R}\_R) \oplus \mathfrak{H}$ onto $\mathfrak{H}$ and let $\iota\_{\mathfrak{H}}$ be the canonical embedding of $\mathfrak{H}$ into $\mathfrak{H}(\mathsf{R}\_R) \oplus \mathfrak{H}$. Then

$$\tilde{A}\_R = \left\{ \left\{ \begin{pmatrix} f \\ h \end{pmatrix}, \begin{pmatrix} f' \\ -h' \end{pmatrix} \right\} : \begin{array}{l} f, f' \in \mathfrak{H}(\mathsf{R}\_R), \ h, h' \in \mathfrak{H}, \\ f'(\xi) - \xi f(\xi) = (I + \xi R(\xi))h + R(\xi)h' \end{array} \right\}$$

is a self-adjoint relation in the Hilbert space $\mathfrak{H}(\mathsf{R}\_R) \oplus \mathfrak{H}$ and the compressed resolvent of $\tilde{A}\_R$ onto $\mathfrak{H}$ is given by

$$P\_{\mathfrak{H}}(\tilde{A}\_R - \lambda)^{-1} \iota\_{\mathfrak{H}} = R(\lambda), \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Furthermore, the self-adjoint relation $\tilde{A}\_R$ satisfies the following minimality condition:

$$\mathfrak{H}(\mathsf{R}\_R) \oplus \mathfrak{H} = \overline{\operatorname{span}} \left\{ \mathfrak{H}, \operatorname{ran} \left( \tilde{A}\_R - \lambda \right)^{-1} \iota\_{\mathfrak{H}} : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\}. \tag{4.4.15}$$

Proof. Let $R : \mathbb{C} \backslash \mathbb{R} \to \mathbf{B}(\mathfrak{H})$ be a generalized resolvent and consider the Nevanlinna pair $\{C, D\}$ defined by

$$\{C(\lambda), D(\lambda)\} = \{-R(\lambda), I + \lambda R(\lambda)\};$$

cf. the proof of Lemma 4.4.6. Then the kernels $\mathsf{N}\_{C,D}$ and $\mathsf{R}\_R$ coincide by (4.4.14) and hence one has $\mathfrak{H}(\mathsf{N}\_{C,D}) = \mathfrak{H}(\mathsf{R}\_R)$. Now Theorem 4.4.2 (with $\mathcal{G} = \mathfrak{H}$) can be applied to the Nevanlinna family $\tau(\lambda) = \{C(\lambda), D(\lambda)\}$. It follows that $\tilde{A}\_R := \tilde{A}\_{C,D}$ is a self-adjoint relation in the Hilbert space $\mathfrak{H}(\mathsf{R}\_R) \oplus \mathfrak{H}$ and that its compressed resolvent is given by

$$P\_{\mathfrak{H}}(\tilde{A}\_R - \lambda)^{-1} \iota\_{\mathfrak{H}} = -(\tau(\lambda) + \lambda)^{-1} = R(\lambda), \qquad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

where the fact that $\tau(\lambda) + \lambda = \{-R(\lambda), I\}$ was used in the last equality. Moreover, the minimality condition (4.4.15) holds. $\square$

By Corollary 4.4.7, every generalized resolvent can be interpreted as a compressed resolvent of a self-adjoint relation. Such compressed resolvents have been discussed briefly in the context of the Kreĭn formula in Section 2.7 and will be further studied in Section 4.5. The next theorem complements Corollary 4.4.7 by providing equivalent conditions. In particular, generalized resolvents or, equivalently, compressed resolvents, are characterized as Stieltjes transforms of nondecreasing families of nonnegative contractions. As a simple consequence one obtains the Sz.-Nagy dilation theorem in Corollary 4.4.9.

**Theorem 4.4.8.** Let $\mathfrak{H}$ be a Hilbert space and let $R : \mathbb{C} \backslash \mathbb{R} \to \mathbf{B}(\mathfrak{H})$ be an operator function. Then the following statements are equivalent:

(i) $R$ is a generalized resolvent.

(ii) There exist a Hilbert space $\mathfrak{K}$ and a self-adjoint relation $\tilde{A}$ in $\mathfrak{H} \oplus \mathfrak{K}$ such that
$$R(\lambda) = P\_{\mathfrak{H}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Furthermore, the self-adjoint relation $\tilde{A}$ satisfies the following minimality condition:

$$\mathfrak{H} \oplus \mathfrak{K} = \overline{\operatorname{span}} \left\{ \mathfrak{H}, \operatorname{ran} \left( \tilde{A} - \lambda \right)^{-1} \iota\_{\mathfrak{H}} \, : \, \lambda \in \mathbb{C} \, \backslash \mathbb{R} \right\}.$$

(iii) There exists a nondecreasing function $\Sigma : \mathbb{R} \to \mathbf{B}(\mathfrak{H})$, whose values are nonnegative contractions, such that $\int\_{\mathbb{R}} d\Sigma(t) \in \mathbf{B}(\mathfrak{H})$, $\big\| \int\_{\mathbb{R}} d\Sigma(t) \big\| \le 1$, and

$$R(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\Sigma(t), \quad \lambda \in \mathbb{C} \,\backslash \,\mathbb{R}.$$

Proof. (i) ⇒ (ii) This follows directly from Corollary 4.4.7.

(ii) ⇒ (iii) Since $\tilde{A}$ is self-adjoint, one can write

$$(\tilde{A} - \lambda)^{-1} = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, dE(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

with the spectral measure $E(\cdot)$ of $\tilde{A}$; cf. (1.5.6). The function $t \mapsto E((-\infty, t))$ is a nondecreasing family of orthogonal projections from $\mathbb{R}$ to $\mathbf{B}(\mathfrak{H} \oplus \mathfrak{K})$ and one has

$$R(\lambda) = P\_{\mathfrak{H}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}} = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, dP\_{\mathfrak{H}} E(t) \iota\_{\mathfrak{H}}.$$

Now define $\Sigma(t) = P\_{\mathfrak{H}} E((-\infty, t)) \iota\_{\mathfrak{H}}$, which is a nondecreasing family of nonnegative contractions from $\mathbb{R}$ to $\mathbf{B}(\mathfrak{H})$ that satisfies $\int\_{\mathbb{R}} d\Sigma(t) \in \mathbf{B}(\mathfrak{H})$ and the estimate $\big\| \int\_{\mathbb{R}} d\Sigma(t) \big\| \le 1$.

(iii) ⇒ (i) It is clear that the function $R : \mathbb{C} \backslash \mathbb{R} \to \mathbf{B}(\mathfrak{H})$ is holomorphic and satisfies $R(\lambda)^\* = R(\overline{\lambda})$ for $\lambda \in \mathbb{C} \backslash \mathbb{R}$. Moreover, it follows from Proposition A.5.4 that

$$\frac{\operatorname{Im} R(\lambda)}{\operatorname{Im} \lambda} - R(\lambda)R(\lambda)^\* = \frac{\operatorname{Im} R(\overline{\lambda})}{\operatorname{Im} \overline{\lambda}} - R(\overline{\lambda})^\* R(\overline{\lambda}) \ge 0, \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

which implies that $R$ is a generalized resolvent. $\square$
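The Stieltjes-transform characterization in statement (iii) can be illustrated numerically for a purely discrete $\Sigma$; the jump locations and sizes below are arbitrary choices made only for this sketch.

```python
import numpy as np

# Discrete nondecreasing family in H = C: nonnegative jumps s_k at points t_k
# with total mass <= 1; its Stieltjes transform is R(lam) = sum_k s_k/(t_k - lam).
t = np.array([-2.0, 0.5, 1.0, 3.0])
s = np.array([0.3, 0.2, 0.25, 0.15])          # nonnegative jumps, total mass 0.9 <= 1

def R(lam):
    return np.sum(s / (t - lam))              # R(lam) = int (t - lam)^{-1} dSigma(t)

rng = np.random.default_rng(2)
for lam in rng.normal(size=20) + 1j * (0.1 + rng.random(20)):
    # Property (ii) of a generalized resolvent: R(lam)* = R(conj lam)
    assert abs(R(np.conj(lam)) - np.conj(R(lam))) < 1e-10
    # Property (iii): Im R(lam)/Im lam - |R(lam)|^2 >= 0
    assert R(lam).imag / lam.imag - abs(R(lam)) ** 2 > -1e-10
```

The inequality holds here because, by Cauchy–Schwarz with the weights $s\_k$, the total mass bound $\sum\_k s\_k \le 1$ forces $|R(\lambda)|^2 \le \sum\_k s\_k/|t\_k - \lambda|^2 = \mathrm{Im}\, R(\lambda)/\mathrm{Im}\, \lambda$, mirroring the implication (iii) ⇒ (i).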

The next corollary is a variant of the dilation theorem, which goes back to M. A. Naĭmark and B. Sz.-Nagy; here it is obtained from Theorem 4.4.8 and the Stieltjes inversion formula.

**Corollary 4.4.9.** Let $\Sigma : \mathbb{R} \to \mathbf{B}(\mathfrak{H})$ be a left-continuous nondecreasing function, whose values are nonnegative contractions, such that

$$\int\_{\mathbb{R}} d\Sigma(t) \in \mathbf{B}(\mathfrak{H}), \quad \left\| \int\_{\mathbb{R}} d\Sigma(t) \right\| \le 1, \quad \text{and} \quad \Sigma(-\infty) = 0.$$

Then there exist a Hilbert space $\mathfrak{K}$ and a left-continuous nondecreasing function $E : \mathbb{R} \to \mathbf{B}(\mathfrak{H} \oplus \mathfrak{K})$, whose values are orthogonal projections, such that

$$
\Sigma(t) = P\_{\mathfrak{H}} E(t) \iota\_{\mathfrak{H}}, \qquad t \in \mathbb{R}.
$$

Proof. Associate with Σ the function

$$R(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\Sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{4.4.16}$$

By Theorem 4.4.8, there exist a Hilbert space $\mathfrak{K}$ and a self-adjoint relation $\tilde{A}$ in $\mathfrak{H} \oplus \mathfrak{K}$ such that the compression of the resolvent of $\tilde{A}$ onto $\mathfrak{H}$ is given by

$$P\_{\mathfrak{H}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}} = R(\lambda), \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.\tag{4.4.17}$$

Let $E(\cdot)$ be the spectral measure of $\tilde{A}$ and let $t \mapsto E((-\infty, t))$ be the corresponding spectral function, which is left-continuous and satisfies $\lim\_{t \to -\infty} E((-\infty, t)) = 0$. As in the proof of Theorem 4.4.8 one has

$$P\_{\mathfrak{H}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}} = P\_{\mathfrak{H}}\left(\int\_{\mathbb{R}} \frac{1}{t - \lambda} \, dE(t)\right) \iota\_{\mathfrak{H}} = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, dP\_{\mathfrak{H}} E(t) \iota\_{\mathfrak{H}}.$$

Taking into account (4.4.16) and (4.4.17), it follows that

$$\int\_{\mathbb{R}} \frac{1}{t - \lambda} dP\_{\mathfrak{H}} E(t) \iota\_{\mathfrak{H}} = \int\_{\mathbb{R}} \frac{1}{t - \lambda} d\Sigma(t), \qquad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

and hence for all h ∈ H

$$\int\_{\mathbb{R}} \frac{1}{t - \lambda} d(E(t)\iota\_{\mathfrak{H}}h, \iota\_{\mathfrak{H}}h) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} d(\Sigma(t)h, h), \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Since the functions $t \mapsto (E(t)\iota\_{\mathfrak{H}} h, \iota\_{\mathfrak{H}} h)$ and $t \mapsto (\Sigma(t)h, h)$ are left-continuous, and

$$\lim\_{t \to -\infty} \left( E((-\infty, t)) \iota\_{\mathfrak{H}} h, \iota\_{\mathfrak{H}} h \right) = 0 = (\Sigma(-\infty) h, h),$$

the Stieltjes inversion formula in Corollary A.1.2 yields $(E(t)\iota\_{\mathfrak{H}} h, \iota\_{\mathfrak{H}} h) = (\Sigma(t)h, h)$ for all $t \in \mathbb{R}$ and $h \in \mathfrak{H}$. This leads to the assertion. $\square$
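A minimal finite-dimensional instance of this dilation (an illustration, not the corollary's general construction): a single contractive jump of size $\theta \in (0,1)$ in $\mathfrak{H} = \mathbb{C}$ is the compression of a rank-one orthogonal projection in $\mathbb{C}^2$; the value of $\theta$ below is an arbitrary choice.

```python
import numpy as np

# One contractive jump of size theta in H = C, dilated in H + K = C^2:
# E is the orthogonal projection onto span{(sqrt(theta), sqrt(1 - theta))},
# and its compression to the first coordinate recovers theta.
theta = 0.37                                  # illustrative jump size in (0, 1)
v = np.array([np.sqrt(theta), np.sqrt(1.0 - theta)])
E = np.outer(v, v)                            # E^2 = E = E^T, an orthogonal projection

assert np.allclose(E @ E, E)
assert np.allclose(E, E.T)
assert abs(E[0, 0] - theta) < 1e-12           # P_H E iota_H equals the jump theta
```

The point of the dilation is visible already here: $\theta$ itself is a nonnegative contraction but not a projection, yet it becomes the corner of a genuine orthogonal projection in the larger space.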

## **4.5 Kreĭn's formula for exit space extensions**

Let $S$ be a closed symmetric relation in the Hilbert space $\mathfrak{H}$, let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$, $A\_0 = \ker \Gamma\_0$, and let $\gamma$ and $M$ be the corresponding $\gamma$-field and Weyl function, respectively. Suppose that $\tilde{A}$ is a self-adjoint extension of $S$ in $\mathfrak{H} \oplus \mathfrak{H}'$, where $\mathfrak{H}'$ is the exit space. It was shown in Theorem 2.7.4 that there exists a Nevanlinna family $\tau(\lambda)$, $\lambda \in \mathbb{C} \backslash \mathbb{R}$, in $\mathcal{G}$ such that

$$P\_{\mathfrak{H}}(\widetilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}} = (A\_0 - \lambda)^{-1} - \gamma(\lambda) \left( M(\lambda) + \tau(\lambda) \right)^{-1} \gamma(\overline{\lambda})^\*, \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

holds. This is Kreĭn's formula for the compressed resolvents of self-adjoint exit space extensions (as studied by M. A. Naĭmark); it is also referred to as the Kreĭn–Naĭmark formula in this text; cf. Section 2.7.

The goal of this section is to show the converse statement. More precisely, it will be proved that for every Nevanlinna family $\tau(\lambda)$, $\lambda \in \mathbb{C} \backslash \mathbb{R}$, in the Hilbert space $\mathcal{G}$ there exists a self-adjoint exit space extension $\tilde{A}$ of $S$ such that the compressed resolvent of $\tilde{A}$ onto $\mathfrak{H}$ is given by the Kreĭn–Naĭmark formula. The following result is a first step.

**Lemma 4.5.1.** Let $S$ be a closed symmetric relation in $\mathfrak{H}$, let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$ with $A\_0 = \ker \Gamma\_0$, and let $\gamma$ and $M$ be the corresponding $\gamma$-field and Weyl function, respectively. Let $\tau = \{A, B\}$ be a Nevanlinna family in $\mathcal{G}$ and define $R(\lambda)$, $\lambda \in \mathbb{C} \backslash \mathbb{R}$, by

$$R(\lambda) = (A\_0 - \lambda)^{-1} - \gamma(\lambda) \left( M(\lambda) + \tau(\lambda) \right)^{-1} \gamma(\overline{\lambda})^\*. \tag{4.5.1}$$

Then the kernel

$$\mathsf{R}\_R(\lambda, \mu) = \frac{R(\lambda) - R(\mu)^\*}{\lambda - \overline{\mu}} - R(\lambda)R(\mu)^\*, \quad \lambda, \mu \in \mathbb{C} \backslash \mathbb{R}, \quad \lambda \neq \overline{\mu}, \tag{4.5.2}$$

satisfies

$$\mathsf{R}\_R(\lambda,\mu) = W(\lambda)\mathsf{N}\_{A,B}(\lambda,\mu)W(\mu)^\*, \tag{4.5.3}$$

where

$$W(\lambda) = \gamma(\lambda) \left( M(\overline{\lambda}) A(\overline{\lambda}) + B(\overline{\lambda}) \right)^{-\ast}. \tag{4.5.4}$$

In particular, the kernel $\mathsf{R}\_R$ is nonnegative, symmetric, holomorphic, and uniformly bounded on compact subsets of $\mathbb{C} \backslash \mathbb{R}$.

Proof. Step 1. For $\lambda \in \mathbb{C} \backslash \mathbb{R}$ introduce the notation

$$R\_0(\lambda) = (A\_0 - \lambda)^{-1} \quad \text{and} \quad Q(\lambda) = \left(M(\lambda) + \tau(\lambda)\right)^{-1},$$

so that R in (4.5.1) is given by

$$R(\lambda) = R\_0(\lambda) - \gamma(\lambda)Q(\lambda)\gamma(\overline{\lambda})^\*.$$

Rewrite the kernel $\mathsf{R}\_R(\cdot,\cdot)$ in (4.5.2) in terms of this notation:

$$\begin{split} R\_R(\lambda,\mu) &= \frac{1}{\lambda - \overline{\mu}} \Big( R\_0(\lambda) - R\_0(\mu)^\* - \gamma(\lambda)Q(\lambda)\gamma(\overline{\lambda})^\* + \gamma(\overline{\mu})Q(\mu)^\*\gamma(\mu)^\* \Big) \\ &- \Big( R\_0(\lambda) - \gamma(\lambda)Q(\lambda)\gamma(\overline{\lambda})^\* \Big) \Big( R\_0(\mu)^\* - \gamma(\overline{\mu})Q(\mu)^\*\gamma(\mu)^\* \Big) \\ &= \frac{1}{\lambda - \overline{\mu}} \Big( -\gamma(\lambda)Q(\lambda)\gamma(\overline{\lambda})^\* + \gamma(\overline{\mu})Q(\mu)^\*\gamma(\mu)^\* \Big) \\ &+ R\_0(\lambda)\gamma(\overline{\mu})Q(\mu)^\*\gamma(\mu)^\* + \gamma(\lambda)Q(\lambda)\gamma(\overline{\lambda})^\*R\_0(\mu)^\* \\ &- \gamma(\lambda)Q(\lambda)\gamma(\overline{\lambda})^\*\gamma(\overline{\mu})Q(\mu)^\*\gamma(\mu)^\*. \end{split}$$

Recall that, by Proposition 2.3.2 (ii) and Proposition 2.3.6 (iii),

$$R\_0(\lambda)\gamma(\overline{\mu}) = \frac{\gamma(\lambda) - \gamma(\overline{\mu})}{\lambda - \overline{\mu}}, \quad \gamma(\overline{\lambda})^\* R\_0(\mu)^\* = \frac{\gamma(\overline{\lambda})^\* - \gamma(\mu)^\*}{\lambda - \overline{\mu}},$$

and

$$\gamma(\overline{\lambda})^\* \gamma(\overline{\mu}) = \left(\gamma(\overline{\mu})^\* \gamma(\overline{\lambda})\right)^\* = \left(\frac{M(\overline{\lambda}) - M(\overline{\mu})^\*}{\overline{\lambda} - \mu}\right)^\* = \frac{M(\lambda) - M(\mu)^\*}{\lambda - \overline{\mu}}.$$

Therefore, the kernel $\mathsf{R}\_R(\cdot,\cdot)$ has the form

$$\mathsf{R}\_R(\lambda, \mu) = \frac{1}{\lambda - \overline{\mu}}\, \gamma(\lambda) \Big[ Q(\mu)^\* - Q(\lambda) - Q(\lambda) \big( M(\lambda) - M(\mu)^\* \big) Q(\mu)^\* \Big] \gamma(\mu)^\*. \tag{4.5.5}$$

Step 2. Express the identity (4.5.5) in terms of the Nevanlinna pair {A, B}, representing the Nevanlinna family τ . For this, consider the equivalent Nevanlinna pair {C, D} as in Lemma 4.4.3, that is,

$$C(\lambda) = A(\lambda)X(\lambda) \quad \text{and} \quad D(\lambda) = B(\lambda)X(\lambda),$$

where X(λ) = (B(λ) + λA(λ))<sup>−1</sup>, so that

$$\mathcal{N}\_{C,D}(\lambda,\mu) = \frac{D(\lambda)C(\mu)^\* - C(\lambda)D(\mu)^\*}{\lambda - \overline{\mu}}.\tag{4.5.6}$$

Observe that Q(λ) can be written in terms of τ = {C, D} as

$$Q(\lambda) = C(\lambda) \left( M(\lambda)C(\lambda) + D(\lambda) \right)^{-1};$$

cf. (1.12.10). It follows that

$$Q(\lambda) = Q(\overline{\lambda})^\* = \left(C(\lambda)M(\lambda) + D(\lambda)\right)^{-1}C(\lambda) \tag{4.5.7}$$

and

$$Q(\mu)^{\*} = Q(\overline{\mu}) = C(\mu)^{\*} \left( C(\mu)M(\mu) + D(\mu) \right)^{-\*}.\tag{4.5.8}$$

Inserting the expressions (4.5.7) and (4.5.8) in (4.5.5) one arrives after a straightforward computation at the identity

$$\mathcal{R}\_R(\lambda,\mu) = Z(\lambda)\mathcal{N}\_{C,D}(\lambda,\mu)Z(\mu)^\*,$$

where the factor Z(λ) is given by

$$Z(\lambda) = \gamma(\lambda) \left( C(\lambda) M(\lambda) + D(\lambda) \right)^{-1}. \tag{4.5.9}$$

Recall that N<sub>A,B</sub> and N<sub>C,D</sub> are related via (4.4.10). Therefore, with (4.5.6) and (4.5.9) one obtains the identity (4.5.3), where

$$W(\lambda) = \gamma(\lambda) \left( C(\lambda)M(\lambda) + D(\lambda) \right)^{-1} \left( B(\overline{\lambda}) + \overline{\lambda}A(\overline{\lambda}) \right)^{-\*}.\tag{4.5.10}$$

The proof is finished by writing the normalized pair {C, D} in (4.5.10) in terms of the pair {A, B}. Observe that, since X(λ) = (B(λ) + λA(λ))<sup>−1</sup>, the symmetry property B(λ̄)<sup>∗</sup>A(λ) = A(λ̄)<sup>∗</sup>B(λ) of the Nevanlinna pair yields

$$\begin{aligned} \left(B(\overline{\lambda})^{\*} + \lambda A(\overline{\lambda})^{\*}\right) \left(C(\lambda)M(\lambda) + D(\lambda)\right) &= \left(B(\overline{\lambda})^{\*} + \lambda A(\overline{\lambda})^{\*}\right) \left(A(\lambda)X(\lambda)M(\lambda) + B(\lambda)X(\lambda)\right) \\ &= A(\overline{\lambda})^{\*} \left(B(\lambda) + \lambda A(\lambda)\right) X(\lambda)M(\lambda) + B(\overline{\lambda})^{\*} \left(B(\lambda) + \lambda A(\lambda)\right) X(\lambda) \\ &= A(\overline{\lambda})^{\*}M(\lambda) + B(\overline{\lambda})^{\*} \\ &= \left(M(\overline{\lambda})A(\overline{\lambda}) + B(\overline{\lambda})\right)^{\*}, \end{aligned}$$

which gives (4.5.4).

It follows from (4.5.3) and (4.5.4) that the kernel R<sub>R</sub> in (4.5.2) is nonnegative, symmetric, holomorphic, and uniformly bounded on compact subsets of C \ R. □

Lemma 4.5.1 shows that R<sub>R</sub>(·, ·) is a reproducing kernel. Therefore, one may apply Theorem 4.4.8.

**Theorem 4.5.2.** Let S be a closed symmetric relation, let {G, Γ0, Γ1} be a boundary triplet for S<sup>∗</sup> with A0 = ker Γ0, and let γ and M be the corresponding γ-field and Weyl function, respectively. Let τ be a Nevanlinna family in G. Then there exist an exit Hilbert space H′ and a self-adjoint relation Ã in H ⊕ H′ such that Ã is an extension of S and the compressed resolvent of Ã is given by the Kreĭn–Naĭmark formula:

$$P\_{\mathfrak{H}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}} = (A\_0 - \lambda)^{-1} - \gamma(\lambda) \left( M(\lambda) + \tau(\lambda) \right)^{-1} \gamma(\overline{\lambda})^\*, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{4.5.11}$$

Furthermore, the self-adjoint relation Ã satisfies the following minimality condition:

$$\mathfrak{H} \oplus \mathfrak{H}' = \overline{\text{span}} \left\{ \mathfrak{H}, \text{ran} \left( \tilde{A} - \mu \right)^{-1} \iota\_{\mathfrak{H}} : \mu \in \mathbb{C} \backslash \mathbb{R} \right\}. \tag{4.5.12}$$

Proof. Define the function R as in Lemma 4.5.1. Then R is a **B**(H)-valued holomorphic function on C \ R which satisfies R(λ)<sup>∗</sup> = R(λ̄) and so, by Lemma 4.5.1, R is a generalized resolvent. Hence, by Theorem 4.4.8, the function R is a compressed resolvent, that is, there exist a Hilbert space H′ and a self-adjoint relation Ã in H ⊕ H′ such that

$$R(\lambda) = P\_{\mathfrak{H}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R};$$

this implies (4.5.11). Moreover, it follows from Theorem 4.4.8 that Ã satisfies the minimality condition (4.5.12).

It remains to prove that S ⊂ Ã. Observe first that by (4.5.11) the Štraus family corresponding to Ã satisfies

$$\begin{aligned} T(\lambda) &= \left\{ \left\{ R(\lambda)h, (I + \lambda R(\lambda))h \right\} : h \in \mathfrak{H} \right\} \\ &\subset \left\{ \left\{ R\_0(\lambda)h, (I + \lambda R\_0(\lambda))h \right\} : h \in \mathfrak{H} \right\} + \left\{ \left\{ \gamma(\lambda)\varphi, \lambda\gamma(\lambda)\varphi \right\} : \varphi \in \mathfrak{G} \right\}. \end{aligned}$$

Since each relation on the right-hand side is contained in S<sup>∗</sup>, so is the relation T(λ). As T(λ̄)<sup>∗</sup> = T(λ), it follows that S ⊂ T(λ). Now let {f, f′} ∈ S, so that for each λ ∈ C \ R there exists h ∈ H′ such that

$$\left\{ \begin{pmatrix} f \\ h \end{pmatrix}, \begin{pmatrix} f' \\ \lambda h \end{pmatrix} \right\} \in \tilde{A}.$$

The relation Ã is self-adjoint and, in particular, symmetric. Therefore, one sees that (f′, f) + λ(h, h) ∈ R, while (f′, f) ∈ R since S is symmetric. Since λ ∈ C \ R, it follows that h = 0, and thus

$$\left\{ \begin{pmatrix} f \\ 0 \end{pmatrix}, \begin{pmatrix} f' \\ 0 \end{pmatrix} \right\} \in \tilde{A}.$$

This shows that S ⊂ Ã. □

## **4.6 Orthogonal coupling of boundary triplets**

In this section a different look is taken at the Kreĭn–Naĭmark formula. By means of an abstract coupling method for direct orthogonal sums of symmetric relations and corresponding boundary triplets, a particular self-adjoint extension Ã of the direct sum is identified, and it is shown that the compressed resolvent of Ã is of the same form as in the Kreĭn–Naĭmark formula. When combined with Theorem 4.2.4, this coupling procedure provides a constructive approach to the exit space extension in Theorem 4.5.2 in the special case where the Nevanlinna family τ is a uniformly strict Nevanlinna function.

First a slightly more general, abstract point of view is adopted. In the following let S and T be closed symmetric relations in the Hilbert spaces H and H′, respectively, and assume that the defect numbers of S and T coincide:

$$n\_+(S) = n\_-(S) = n\_+(T) = n\_-(T) \le \infty.$$

Let {G, Γ0, Γ1} be a boundary triplet for S<sup>∗</sup> with A0 = ker Γ0 and let {G, Γ′0, Γ′1} be a boundary triplet for T<sup>∗</sup> with B0 = ker Γ′0. Then it is easy to see that the direct orthogonal sum S ⊕ T is a closed symmetric relation in H ⊕ H′ and {G ⊕ G, Γ̃0, Γ̃1}, where

$$
\widetilde{\Gamma}\_0 \begin{pmatrix} \widehat{f} \\ \widehat{g} \end{pmatrix} = \begin{pmatrix} \Gamma\_0 \widehat{f} \\ \Gamma\_0' \widehat{g} \end{pmatrix} \quad \text{and} \quad \widetilde{\Gamma}\_1 \begin{pmatrix} \widehat{f} \\ \widehat{g} \end{pmatrix} = \begin{pmatrix} \Gamma\_1 \widehat{f} \\ \Gamma\_1' \widehat{g} \end{pmatrix}, \quad \widehat{f} \in S^\*, \widehat{g} \in T^\*, \tag{4.6.1}
$$

is a boundary triplet for (S ⊕ T)<sup>∗</sup> = S<sup>∗</sup> ⊕ T<sup>∗</sup>, and that

$$
\widetilde{A}\_0 := A\_0 \hat{\oplus} B\_0 = \ker \widetilde{\Gamma}\_0 \tag{4.6.2}
$$

is a self-adjoint extension of S ⊕ T in H ⊕ H′. Furthermore, if γ and γ′ denote the γ-fields corresponding to the boundary triplets {G, Γ0, Γ1} and {G, Γ′0, Γ′1}, and M and τ are the Weyl functions corresponding to {G, Γ0, Γ1} and {G, Γ′0, Γ′1}, respectively, then it is clear that for λ ∈ ρ(Ã0) = ρ(A0) ∩ ρ(B0) the γ-field γ̃ and the Weyl function M̃ corresponding to the boundary triplet {G ⊕ G, Γ̃0, Γ̃1} have the forms

$$
\widetilde{\gamma}(\lambda) = \begin{pmatrix} \gamma(\lambda) & 0 \\ 0 & \gamma'(\lambda) \end{pmatrix} \quad \text{and} \quad \widetilde{M}(\lambda) = \begin{pmatrix} M(\lambda) & 0 \\ 0 & \tau(\lambda) \end{pmatrix} . \tag{4.6.3}
$$

Let Ã be a self-adjoint extension of S ⊕ T in H ⊕ H′. Then Kreĭn's formula in Theorem 2.6.1 has the form

$$(\tilde{A} - \lambda)^{-1} = (\tilde{A}\_0 - \lambda)^{-1} + \tilde{\gamma}(\lambda) \left(\tilde{\Theta} - \widetilde{M}(\lambda)\right)^{-1} \tilde{\gamma}(\overline{\lambda})^\*$$

for all λ ∈ ρ(Ã) ∩ ρ(Ã0), where γ̃ and M̃ denote the γ-field and the Weyl function corresponding to the boundary triplet {G ⊕ G, Γ̃0, Γ̃1} in (4.6.3). If Θ̃ = {A, B} with A, B ∈ **B**(G ⊕ G), then

$$(\tilde{A} - \lambda)^{-1} = (\tilde{A}\_0 - \lambda)^{-1} - \tilde{\gamma}(\lambda)\mathcal{A}\left(\widetilde{M}(\lambda)\mathcal{A} - \mathcal{B}\right)^{-1}\tilde{\gamma}(\overline{\lambda})^\*,$$

see Corollary 2.6.3. Writing A and B as block operators

$$\mathcal{A} = \begin{pmatrix} \mathcal{A}\_{11} & \mathcal{A}\_{12} \\ \mathcal{A}\_{21} & \mathcal{A}\_{22} \end{pmatrix} \quad \text{and} \quad \mathcal{B} = \begin{pmatrix} \mathcal{B}\_{11} & \mathcal{B}\_{12} \\ \mathcal{B}\_{21} & \mathcal{B}\_{22} \end{pmatrix},$$

where Aij , Bij ∈ **B**(G), it follows that

$$
\widetilde{M}(\lambda)\mathcal{A} - \mathcal{B} = \begin{pmatrix} M(\lambda)\mathcal{A}\_{11} - \mathcal{B}\_{11} & M(\lambda)\mathcal{A}\_{12} - \mathcal{B}\_{12} \\ \tau(\lambda)\mathcal{A}\_{21} - \mathcal{B}\_{21} & \tau(\lambda)\mathcal{A}\_{22} - \mathcal{B}\_{22} \end{pmatrix},
$$

so that

$$\mathcal{A}\left(\widetilde{M}(\lambda)\mathcal{A}-\mathcal{B}\right)^{-1} = \begin{pmatrix} \mathcal{A}\_{11} & \mathcal{A}\_{12} \\ \mathcal{A}\_{21} & \mathcal{A}\_{22} \end{pmatrix} \begin{pmatrix} M(\lambda)\mathcal{A}\_{11} - \mathcal{B}\_{11} & M(\lambda)\mathcal{A}\_{12} - \mathcal{B}\_{12} \\ \tau(\lambda)\mathcal{A}\_{21} - \mathcal{B}\_{21} & \tau(\lambda)\mathcal{A}\_{22} - \mathcal{B}\_{22} \end{pmatrix}^{-1}.$$

The following proposition exhibits a particular self-adjoint extension of S ⊕ T in H ⊕ H′.

**Proposition 4.6.1.** Let S and T be closed symmetric relations in the Hilbert spaces H and H′ with boundary triplets {G, Γ0, Γ1} and {G, Γ′0, Γ′1} as above, respectively. Then

$$\tilde{A} = \left\{ \begin{pmatrix} \hat{f} \\ \hat{g} \end{pmatrix} : \hat{f} \in S^\*, \hat{g} \in T^\*, \ \Gamma\_0 \hat{f} = \Gamma\_0' \hat{g}, \ \Gamma\_1 \hat{f} = -\Gamma\_1' \hat{g} \right\} \tag{4.6.4}$$

is a self-adjoint relation in H ⊕ H′ and for all λ ∈ C \ R the resolvent of Ã has the form

$$(\tilde{A} - \lambda)^{-1} = (\tilde{A}\_0 - \lambda)^{-1} - \tilde{\gamma}(\lambda) \begin{pmatrix} (M(\lambda) + \tau(\lambda))^{-1} & (M(\lambda) + \tau(\lambda))^{-1} \\ (M(\lambda) + \tau(\lambda))^{-1} & (M(\lambda) + \tau(\lambda))^{-1} \end{pmatrix} \tilde{\gamma}(\overline{\lambda})^\*,$$

where Ã0 is as in (4.6.2), γ̃ is as in (4.6.3), and M and τ denote the Weyl functions corresponding to {G, Γ0, Γ1} and {G, Γ′0, Γ′1}, respectively.

Proof. Consider the boundary triplet {G ⊕ G, Γ̃0, Γ̃1} in (4.6.1) and observe that the relation

$$\tilde{\Theta} := \left\{ \left\{ \begin{pmatrix} \varphi \\ \varphi \end{pmatrix}, \begin{pmatrix} \psi \\ -\psi \end{pmatrix} \right\} : \varphi, \psi \in \mathfrak{G} \right\} \tag{4.6.5}$$

is self-adjoint in G ⊕ G. Hence, by Corollary 2.1.4 and (4.6.1),

$$\left\{ \begin{pmatrix} \widehat{f} \\ \widehat{g} \end{pmatrix} \in S^\* \oplus T^\* : \widetilde{\Gamma} \begin{pmatrix} \widehat{f} \\ \widehat{g} \end{pmatrix} = \left\{ \begin{pmatrix} \Gamma\_0 \widehat{f} \\ \Gamma\_0' \widehat{g} \end{pmatrix}, \begin{pmatrix} \Gamma\_1 \widehat{f} \\ \Gamma\_1' \widehat{g} \end{pmatrix} \right\} \in \widetilde{\Theta} \right\} \subset S^\* \oplus T^\* \tag{4.6.6}$$

is a self-adjoint relation in H ⊕ H′. Now it follows from the particular form of Θ̃ in (4.6.5) that the self-adjoint relation in (4.6.6) coincides with Ã in (4.6.4).

Next the resolvent of Ã will be computed. Recall first that Kreĭn's formula in Theorem 2.6.1 implies

$$(\tilde{A} - \lambda)^{-1} = (\tilde{A}\_0 - \lambda)^{-1} + \tilde{\gamma}(\lambda) \left(\tilde{\Theta} - \widetilde{M}(\lambda)\right)^{-1} \tilde{\gamma}(\overline{\lambda})^\* \tag{4.6.7}$$

for all λ ∈ ρ(Ã) ∩ ρ(Ã0), where γ̃ and M̃ denote the γ-field and the Weyl function corresponding to {G ⊕ G, Γ̃0, Γ̃1} in (4.6.3). From (4.6.5) and (4.6.3) one obtains

$$\left(\widetilde{\Theta} - \widetilde{M}(\lambda)\right)^{-1} = \left\{ \left\{ \begin{pmatrix} \psi - M(\lambda)\varphi\\ -\psi - \tau(\lambda)\varphi \end{pmatrix}, \begin{pmatrix} \varphi\\ \varphi \end{pmatrix} \right\} : \varphi, \psi \in \mathfrak{G} \right\}.$$

Setting φ := ψ − M(λ)ϕ and χ := −ψ − τ(λ)ϕ, one has φ + χ = −(M(λ) + τ(λ))ϕ. For λ ∈ C \ R it follows from Lemma 1.11.5 (see also Proposition 1.12.6) that (M(λ) + τ(λ))<sup>−1</sup> ∈ **B**(G), and hence

$$\varphi = -\left(M(\lambda) + \tau(\lambda)\right)^{-1}\phi - \left(M(\lambda) + \tau(\lambda)\right)^{-1}\chi, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

This yields

$$\left( \widetilde{\Theta} - \widetilde{M}(\lambda) \right)^{-1} = - \begin{pmatrix} (M(\lambda) + \tau(\lambda))^{-1} & (M(\lambda) + \tau(\lambda))^{-1} \\ (M(\lambda) + \tau(\lambda))^{-1} & (M(\lambda) + \tau(\lambda))^{-1} \end{pmatrix},$$

and the statement about the resolvent of Ã follows from (4.6.7). □
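The computation above can be spot-checked numerically in the simplest situation dim G = 1, where M(λ) and τ(λ) are complex numbers. The following minimal Python sketch (all numerical values are arbitrary illustrative choices, not data from the text) verifies that the claimed inverse, the 2×2 block with all entries −(M(λ) + τ(λ))<sup>−1</sup>, maps the pair (ψ − M(λ)φ, −ψ − τ(λ)φ) back to (φ, φ):

```python
# Scalar sanity check (dim G = 1) of the block formula for
# (Theta~ - M~(lambda))^{-1} derived in the proof of Proposition 4.6.1.
# m and t stand for M(lambda) and tau(lambda); the values are arbitrary.
m = 1.0 + 2.0j
t = 0.5 + 1.0j

for phi, psi in [(1.0 + 1.0j, 2.0 - 0.5j), (-3.0j, 0.25 + 4.0j)]:
    # An element of Theta~ - M~(lambda) maps (a, b) back to (phi, phi):
    a = psi - m * phi          # first component:  psi - M(lambda) phi
    b = -psi - t * phi         # second component: -psi - tau(lambda) phi
    # Apply the claimed inverse: all four block entries are -(M + tau)^{-1},
    # so both output components equal -(a + b)/(m + t).
    w = -(a + b) / (m + t)
    assert abs(w - phi) < 1e-12
```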

The compressions of the resolvent of the self-adjoint relation Ã in (4.6.4) to H and H′ are of interest. Note that the resolvent of Ã0 in (4.6.2) is given by the direct orthogonal sum of the resolvents of A0 and B0, and hence for λ ∈ ρ(Ã0) the compressions to H and H′ are

$$P\_{\mathfrak{H}}(\tilde{A}\_0 - \lambda)^{-1} \iota\_{\mathfrak{H}} = (A\_0 - \lambda)^{-1} \quad \text{and} \quad P\_{\mathfrak{H}'}(\tilde{A}\_0 - \lambda)^{-1} \iota\_{\mathfrak{H}'} = (B\_0 - \lambda)^{-1},$$

respectively. The next statement follows directly from Proposition 4.6.1 and (4.6.3).

**Corollary 4.6.2.** Let S and T be closed symmetric relations in the Hilbert spaces H and H′ with boundary triplets {G, Γ0, Γ1} and {G, Γ′0, Γ′1}, and corresponding γ-fields and Weyl functions γ, γ′ and M, τ, respectively. Then for all λ ∈ C \ R the following statements hold:

(i) The compression of the resolvent of the self-adjoint relation Ã in (4.6.4) to H is given by

$$P\_{\mathfrak{H}}(\widetilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}} = (A\_0 - \lambda)^{-1} - \gamma(\lambda) \left( M(\lambda) + \tau(\lambda) \right)^{-1} \gamma(\overline{\lambda})^\*.$$

(ii) The compression of the resolvent of the self-adjoint relation Ã in (4.6.4) to H′ is given by

$$P\_{\mathfrak{H}'} (\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}'} = (B\_0 - \lambda)^{-1} - \gamma'(\lambda) \left( M(\lambda) + \tau(\lambda) \right)^{-1} \gamma'(\overline{\lambda})^\* .$$

Corollary 4.6.2 and Proposition 4.6.1 can also be viewed as an alternative approach to the Kreĭn–Naĭmark formula in the special case where the Nevanlinna family τ in Theorem 4.5.2 is a uniformly strict Nevanlinna function. In fact, according to Theorem 4.2.4 every uniformly strict **B**(G)-valued Nevanlinna function can be realized as a Weyl function, that is, there exist a (reproducing kernel) Hilbert space H′ (= H(N<sub>τ</sub>)), a closed simple symmetric operator T (= S<sub>τ</sub>) in H′, and a boundary triplet {G, Γ′0, Γ′1} for the adjoint T<sup>∗</sup> such that τ is the corresponding Weyl function. In this situation the relation Ã in (4.6.4) is self-adjoint in H ⊕ H′ = H ⊕ H(N<sub>τ</sub>) and its compressed resolvent in Corollary 4.6.2 coincides with the one in the Kreĭn–Naĭmark formula in Theorem 4.5.2. Summing up, the following special case of Theorem 4.5.2 is a consequence of the coupling method in Proposition 4.6.1 and Corollary 4.6.2.

**Corollary 4.6.3.** Let S be a closed symmetric relation, let {G, Γ0, Γ1} be a boundary triplet for S<sup>∗</sup> with A0 = ker Γ0, and let γ and M be the corresponding γ-field and Weyl function, respectively. Let τ be a uniformly strict **B**(G)-valued Nevanlinna function. Then there exist an exit Hilbert space H′ and a self-adjoint relation Ã in H ⊕ H′ such that Ã is an extension of S and the compressed resolvent of Ã is given by the Kreĭn–Naĭmark formula:

$$P\_{\mathfrak{H}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}} = (A\_0 - \lambda)^{-1} - \gamma(\lambda) \left( M(\lambda) + \tau(\lambda) \right)^{-1} \gamma(\overline{\lambda})^\*, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Furthermore, the self-adjoint relation Ã satisfies the minimality condition

$$\mathfrak{H} \oplus \mathfrak{H}' = \overline{\text{span}} \left\{ \mathfrak{H}, \text{ran} \left( \tilde{A} - \mu \right)^{-1} \iota\_{\mathfrak{H}} : \mu \in \mathbb{C} \backslash \mathbb{R} \right\}. \tag{4.6.8}$$

Proof. All statements except the minimality condition (4.6.8) follow from Proposition 4.6.1, Corollary 4.6.2, and Theorem 4.2.4, as explained above. For (4.6.8) recall first that the closed symmetric operator S<sub>τ</sub> (= T) in Theorem 4.2.4 is simple, and hence

$$\mathfrak{H}' = \overline{\text{span}}\left\{ \text{ker}\left( T^\* - \mu \right) : \mu \in \mathbb{C} \backslash \mathbb{R} \right\} = \overline{\text{span}}\left\{ \text{ran}\,\gamma'(\mu) : \mu \in \mathbb{C} \backslash \mathbb{R} \right\}.\tag{4.6.9}$$

It follows from Proposition 4.6.1 that

$$P\_{\mathfrak{H}'} (\tilde{A} - \mu)^{-1} \iota\_{\mathfrak{H}} = -\gamma'(\mu) \left( M(\mu) + \tau(\mu) \right)^{-1} \gamma(\overline{\mu})^\*,$$

and since ran γ(μ̄)<sup>∗</sup> = G and dom (M(μ) + τ(μ)) = G, one sees that

$$\text{ran}\left(P\_{\mathfrak{H}'} (\tilde{A} - \mu)^{-1} \iota\_{\mathfrak{H}}\right) = \text{ran}\,\gamma'(\mu), \qquad \mu \in \mathbb{C} \backslash \mathbb{R}.$$

With (4.6.9) one then concludes that

$$\mathfrak{H}' = \overline{\text{span}} \left\{ \text{ran} \left( P\_{\mathfrak{H}'} (\tilde{A} - \mu)^{-1} \iota\_{\mathfrak{H}} \right) : \mu \in \mathbb{C} \backslash \mathbb{R} \right\},$$

which in turn yields (4.6.8). □

In the next proposition a particular boundary triplet {G ⊕ G, Γ̂0, Γ̂1} is specified such that the self-adjoint relation Ã in (4.6.4) coincides with the kernel of the boundary mapping Γ̂0. The corresponding Weyl function M̂ is useful for the spectral analysis of Ã; cf. Chapter 6.

**Proposition 4.6.4.** Let S and T be closed symmetric relations in the Hilbert spaces H and H′ with boundary triplets {G, Γ0, Γ1} and {G, Γ′0, Γ′1} and corresponding Weyl functions M and τ, respectively, as in the beginning of this section. Then {G ⊕ G, Γ̂0, Γ̂1}, where

$$
\widehat{\Gamma}\_0 \begin{pmatrix} \widehat{f} \\ \widehat{g} \end{pmatrix} = \begin{pmatrix} -\Gamma\_1 \widehat{f} - \Gamma\_1' \widehat{g} \\ \Gamma\_0 \widehat{f} - \Gamma\_0' \widehat{g} \end{pmatrix} \quad \text{and} \quad \widehat{\Gamma}\_1 \begin{pmatrix} \widehat{f} \\ \widehat{g} \end{pmatrix} = \begin{pmatrix} \Gamma\_0 \widehat{f} \\ -\Gamma\_1' \widehat{g} \end{pmatrix}, \quad \widehat{f} \in S^\*, \widehat{g} \in T^\*,
$$

is a boundary triplet for S<sup>∗</sup> ⊕ T<sup>∗</sup> such that the self-adjoint relation Ã in (4.6.4) corresponds to the boundary mapping Γ̂0, that is,

$$
\widetilde{A} = \ker \widehat{\Gamma}\_0.
$$

The Weyl function M̂ of {G ⊕ G, Γ̂0, Γ̂1} is given by

$$\begin{aligned} \widehat{M}(\lambda) &= -\begin{pmatrix} M(\lambda) & -I \\ -I & -\tau(\lambda)^{-1} \end{pmatrix}^{-1} \\ &= \begin{pmatrix} -(M(\lambda) + \tau(\lambda))^{-1} & (M(\lambda) + \tau(\lambda))^{-1}\tau(\lambda) \\ \tau(\lambda)(M(\lambda) + \tau(\lambda))^{-1} & \tau(\lambda)(M(\lambda) + \tau(\lambda))^{-1}M(\lambda) \end{pmatrix} \end{aligned} \tag{4.6.10}$$

for λ ∈ C \ R.

Proof. Instead of a direct proof, the assertions will be obtained as consequences of the results in Section 2.5. For this consider the boundary triplet {G ⊕ G, Γ̃0, Γ̃1} in (4.6.1) with the Weyl function M̃ given in (4.6.3), let

$$\mathcal{A} = \frac{1}{\sqrt{2}} \begin{pmatrix} I & 0 \\ I & 0 \end{pmatrix} \quad \text{and} \quad \mathcal{B} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & I \\ 0 & -I \end{pmatrix},$$

and observe that Θ̃ = {A, B}, with Θ̃ in (4.6.5). It is easy to see that A and B satisfy the conditions in Corollary 2.5.11. Therefore, {G ⊕ G, Γ̌0, Γ̌1}, where

$$
\check{\Gamma}\_0 = \mathcal{B}^\* \widetilde{\Gamma}\_0 - \mathcal{A}^\* \widetilde{\Gamma}\_1 \quad \text{and} \quad \check{\Gamma}\_1 = \mathcal{A}^\* \widetilde{\Gamma}\_0 + \mathcal{B}^\* \widetilde{\Gamma}\_1,
$$

is a boundary triplet with corresponding Weyl function

$$\check{M}(\lambda) = \left(\mathcal{A}^\* + \mathcal{B}^\* \widehat{M}(\lambda)\right) \left(\mathcal{B}^\* - \mathcal{A}^\* \widehat{M}(\lambda)\right)^{-1}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

It follows that

$$
\check{\Gamma}\_0 \begin{pmatrix} \widehat{f} \\ \widehat{g} \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} -\Gamma\_1 \widehat{f} - \Gamma\_1' \widehat{g} \\ \Gamma\_0 \widehat{f} - \Gamma\_0' \widehat{g} \end{pmatrix}, \quad \widehat{f} \in S^\*, \widehat{g} \in T^\*,
$$

and

$$
\check{\Gamma}\_1 \begin{pmatrix} \widehat{f} \\ \widehat{g} \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} \Gamma\_0 \widehat{f} + \Gamma\_0' \widehat{g} \\ \Gamma\_1 \widehat{f} - \Gamma\_1' \widehat{g} \end{pmatrix}, \qquad \widehat{f} \in S^\*, \widehat{g} \in T^\*.
$$

Furthermore, it is easily seen from the above that

$$\begin{aligned} \check{M}(\lambda) &= \begin{pmatrix} 1 & 1 \\ M(\lambda) & -\tau(\lambda) \end{pmatrix} \begin{pmatrix} -M(\lambda) & -\tau(\lambda) \\ 1 & -1 \end{pmatrix}^{-1} \\ &= \begin{pmatrix} 1 & 1 \\ M(\lambda) & -\tau(\lambda) \end{pmatrix} \begin{pmatrix} -(M(\lambda) + \tau(\lambda))^{-1} & (M(\lambda) + \tau(\lambda))^{-1}\tau(\lambda) \\ -(M(\lambda) + \tau(\lambda))^{-1} & -(M(\lambda) + \tau(\lambda))^{-1}M(\lambda) \end{pmatrix}, \end{aligned}$$

where the last step used the identity

$$M(\lambda)(M(\lambda) + \tau(\lambda))^{-1}\tau(\lambda) = \tau(\lambda)(M(\lambda) + \tau(\lambda))^{-1}M(\lambda).$$

Thus, it is clear that

$$
\check{M}(\lambda) = \begin{pmatrix} -2\left(M(\lambda) + \tau(\lambda)\right)^{-1} & \left(M(\lambda) + \tau(\lambda)\right)^{-1}\left(\tau(\lambda) - M(\lambda)\right) \\ \left(\tau(\lambda) - M(\lambda)\right)\left(M(\lambda) + \tau(\lambda)\right)^{-1} & 2M(\lambda)\left(M(\lambda) + \tau(\lambda)\right)^{-1}\tau(\lambda) \end{pmatrix}
$$

holds for all λ ∈ C \ R. Now let

$$D = \frac{1}{\sqrt{2}} \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix} = D^\* \quad \text{and} \quad P = \frac{1}{2} \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix},$$

and apply Corollary 2.5.5 to conclude that

$$
\widehat{\Gamma}\_0 = D^{-1} \check{\Gamma}\_0 \quad \text{and} \quad \widehat{\Gamma}\_1 = D^\* \check{\Gamma}\_1 + PD^{-1} \check{\Gamma}\_0
$$

give a boundary triplet for S<sup>∗</sup> ⊕ T<sup>∗</sup>. According to Corollary 2.5.5, the corresponding Weyl function is given by

$$
\widehat{M}(\lambda) = D^\* \check{M}(\lambda) D + P, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R},
$$

and one verifies that the first identity in (4.6.10) holds. It is straightforward to check the second identity in (4.6.10). □
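The two identities in (4.6.10) can be spot-checked in finite dimensions. The following Python sketch uses arbitrary, noncommuting 2×2 test matrices standing in for M(λ) and τ(λ) (they are illustrative values, not data from the text) and verifies by block multiplication that the second matrix in (4.6.10) is the inverse of the negative of the first block matrix, together with the commutation identity M(λ)(M(λ)+τ(λ))<sup>−1</sup>τ(λ) = τ(λ)(M(λ)+τ(λ))<sup>−1</sup>M(λ) used in the proof:

```python
# Pure-Python 2x2 complex matrix helpers keep the check self-contained.
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def neg(a):
    return [[-a[i][j] for j in range(2)] for i in range(2)]

def inv(a):  # closed-form inverse of a 2x2 matrix
    d = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / d, -a[0][1] / d], [-a[1][0] / d, a[0][0] / d]]

def close(a, b, tol=1e-10):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

I = [[1.0, 0.0], [0.0, 1.0]]
Z = [[0.0, 0.0], [0.0, 0.0]]
M = [[1.0 + 1.0j, 0.5], [0.25j, 2.0 + 0.5j]]   # noncommuting test matrices
T = [[0.5 + 2.0j, 1.0j], [0.0, 1.0 + 1.0j]]

S = inv(add(M, T))                 # (M + tau)^{-1}
# Claimed value of the Weyl function in (4.6.10), written in 2x2 blocks:
W = [[neg(S), mul(S, T)],
     [mul(T, S), mul(mul(T, S), M)]]
# A = -[[M, -I], [-I, -T^{-1}]]; then A * W must be the block identity.
A = [[neg(M), I], [I, inv(T)]]
P00 = add(mul(A[0][0], W[0][0]), mul(A[0][1], W[1][0]))
P01 = add(mul(A[0][0], W[0][1]), mul(A[0][1], W[1][1]))
P10 = add(mul(A[1][0], W[0][0]), mul(A[1][1], W[1][0]))
P11 = add(mul(A[1][0], W[0][1]), mul(A[1][1], W[1][1]))
assert close(P00, I) and close(P11, I) and close(P01, Z) and close(P10, Z)
# The off-diagonal cancellation rests on M(M+tau)^{-1}tau = tau(M+tau)^{-1}M:
assert close(mul(mul(M, S), T), mul(mul(T, S), M))
```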

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 5**

## **Boundary Triplets and Boundary Pairs for Semibounded Relations**

Semibounded relations in a Hilbert space automatically have equal defect numbers, so that there are always self-adjoint extensions. In this chapter the semibounded self-adjoint extensions of a semibounded relation will be investigated. Special attention will be paid to the Friedrichs extension, which is introduced with the help of closed semibounded forms. Section 5.1 provides a self-contained introduction to closed semibounded forms and their representations via semibounded self-adjoint relations. Closely related is the discussion of the ordering for closed semibounded forms and for semibounded self-adjoint relations in Section 5.2; this section also contains a general monotonicity principle about monotone sequences of semibounded relations. The Friedrichs extension of a semibounded relation is defined and its central properties are studied in Section 5.3. Particular attention is paid to semibounded self-adjoint extensions which are transversal to the Friedrichs extension. Section 5.4 is devoted to special semibounded extensions, namely the Kreĭn type extensions. In the nonnegative case these extensions include the well-known Kreĭn–von Neumann extension. The Friedrichs extension and the Kreĭn type extensions act as extremal elements to describe the semibounded self-adjoint extensions with a given lower bound. In Section 5.5 there is a return to boundary triplets and Weyl functions for symmetric relations which are semibounded. Of special interest is the case where the self-adjoint extensions determined by the boundary triplet are semibounded and one of them coincides with the Friedrichs extension. In particular, this leads to a useful abstract version of the first Green formula. The notion of a boundary pair for semibounded relations is developed in Section 5.6.
In conjunction with the above first Green formula, this notion serves as a link between boundary triplet methods and form methods when semibounded self-adjoint extensions are described; in a wider sense it establishes the connection with the Birman–Kre˘ın–Vishik method.

## **5.1 Closed semibounded forms and their representations**

A sesquilinear form t[·, ·] in a Hilbert space H with inner product (·, ·) is a mapping from D × D to C, where D is a linear subspace of H, such that t[·, ·] is linear in the first entry and anti-linear in the second entry. The domain dom t is defined by dom t = D. The form is said to be symmetric if t[ϕ, ψ] coincides with the complex conjugate of t[ψ, ϕ] for all ϕ, ψ ∈ dom t. The corresponding quadratic form t[·] is defined by t[ϕ] = t[ϕ, ϕ], ϕ ∈ dom t. The polarization formula

$$\mathbf{t}[\varphi,\psi] = \frac{1}{4}\{\mathbf{t}[\varphi+\psi] - \mathbf{t}[\varphi-\psi]\} + \frac{i}{4}\{\mathbf{t}[\varphi+i\psi] - \mathbf{t}[\varphi-i\psi]\}\tag{5.1.1}$$

for ϕ, ψ ∈ dom t is easily checked. In the following the term sesquilinear will be dropped; whenever a form t[·, ·] is mentioned it is assumed to be sesquilinear and it will be denoted by t. For instance, the inner product (·, ·) is a form defined on all of H.
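The polarization formula (5.1.1) can be illustrated numerically for the standard inner product on C³, a concrete instance of a sesquilinear form that is linear in the first and anti-linear in the second entry; the vectors below are arbitrary test data, not from the text:

```python
# Check of the polarization formula (5.1.1) for t[phi, psi] = (phi, psi),
# the standard inner product on C^3.
def ip(x, y):
    # inner product, anti-linear in the second argument
    return sum(a * b.conjugate() for a, b in zip(x, y))

def q(x):
    # associated quadratic form t[x] = t[x, x]
    return ip(x, x)

def comb(x, y, c):
    # the vector x + c*y
    return [a + c * b for a, b in zip(x, y)]

phi = [1.0 + 2.0j, -0.5j, 3.0]
psi = [2.0 - 1.0j, 1.0 + 1.0j, -2.0j]

lhs = ip(phi, psi)
rhs = (q(comb(phi, psi, 1)) - q(comb(phi, psi, -1))) / 4 \
    + 1j * (q(comb(phi, psi, 1j)) - q(comb(phi, psi, -1j))) / 4
assert abs(lhs - rhs) < 1e-12
```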

**Definition 5.1.1.** Let t<sup>1</sup> and t<sup>2</sup> be forms in H. Then the inclusion t<sup>2</sup> ⊂ t<sup>1</sup> means that

$$\operatorname{dom} \mathfrak{t}\_2 \subset \operatorname{dom} \mathfrak{t}\_1, \quad \mathfrak{t}\_2[\varphi] = \mathfrak{t}\_1[\varphi], \quad \varphi \in \operatorname{dom} \mathfrak{t}\_2. \tag{5.1.2}$$

If t<sup>2</sup> ⊂ t1, then t<sup>2</sup> is said to be a restriction of t<sup>1</sup> and t<sup>1</sup> is said to be an extension of t2. The sum t<sup>1</sup> + t<sup>2</sup> is defined by

$$(\mathfrak{t}\_1 + \mathfrak{t}\_2)[\varphi, \psi] = \mathfrak{t}\_1[\varphi, \psi] + \mathfrak{t}\_2[\varphi, \psi], \quad \varphi, \psi \in \text{dom}\left(\mathfrak{t}\_1 + \mathfrak{t}\_2\right),$$

where dom (t<sup>1</sup> + t2) = dom t<sup>1</sup> ∩ dom t2.

If <sup>α</sup> <sup>∈</sup> <sup>C</sup> the sum <sup>t</sup>[·, ·] + <sup>α</sup>(·, ·) is given by

$$\mathfrak{t}[\varphi,\psi] + \alpha(\varphi,\psi), \quad \varphi,\psi \in \text{dom}\,\mathfrak{t}.$$

This sum will be denoted by <sup>t</sup>+α. It is symmetric when <sup>t</sup> is symmetric and <sup>α</sup> <sup>∈</sup> <sup>R</sup>.

**Definition 5.1.2.** A symmetric form t in H is bounded from below if there exists a constant <sup>c</sup> <sup>∈</sup> <sup>R</sup> such that

$$\mathfrak{t}[\varphi] \ge c \|\varphi\|^2, \quad \varphi \in \text{dom}\,\mathfrak{t}.$$

This inequality will be denoted by t ≥ c. The lower bound m(t) is the largest such number c ∈ R:

$$m(\mathbf{t}) = \inf \left\{ \frac{\mathbf{t}[\varphi]}{||\varphi||^2} : \varphi \in \text{dom}\,\mathbf{t}, \ \varphi \neq 0 \right\}.$$

If m(t) ≥ 0, then t is called nonnegative.
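As a concrete illustration (an assumption-free finite-dimensional example, not from the text), for a diagonal form t[ϕ] = Σ c<sub>k</sub>|ϕ<sub>k</sub>|² on C<sup>n</sup> the lower bound m(t) is the smallest coefficient c<sub>k</sub>; a short Python check of the Rayleigh-quotient description of m(t):

```python
# For a diagonal form t[phi] = sum_k c_k |phi_k|^2 on C^3 the infimum of
# t[phi]/||phi||^2 over phi != 0 is min(c), attained at a basis vector.
# Coefficients and trial vectors are arbitrary test data.
c = [2.0, 0.5, 3.0]          # diagonal coefficients, all real

def t(phi):
    return sum(ck * abs(x) ** 2 for ck, x in zip(c, phi))

def rayleigh(phi):
    return t(phi) / sum(abs(x) ** 2 for x in phi)

trials = [[1, 1, 1], [1j, 2, -1], [0, 1, 0], [3, -2j, 0.5]]
assert all(rayleigh(phi) >= min(c) - 1e-12 for phi in trials)
assert abs(rayleigh([0, 1, 0]) - min(c)) < 1e-12   # attained, so m(t) = 0.5
```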


In the following the terminology semibounded form is used for a symmetric form which is bounded from below. Note that t is a semibounded form if and only if for some, and hence for all, α ∈ R the form t + α is semibounded. For a semibounded form t the lower bound m(t) will often be denoted by γ. Note that the form t − γ, γ = m(t), is nonnegative with lower bound 0. Therefore, one has the Cauchy–Schwarz inequality

$$|(\mathbf{t} - \gamma)[\varphi, \psi]| \le (\mathbf{t} - \gamma)[\varphi]^{\frac{1}{2}}(\mathbf{t} - \gamma)[\psi]^{\frac{1}{2}}, \quad \varphi, \psi \in \text{dom } \mathbf{t}, \tag{5.1.3}$$

and hence the triangle inequality

$$(\mathfrak{t}-\gamma)[\varphi+\psi]^{\frac{1}{2}} \le (\mathfrak{t}-\gamma)[\varphi]^{\frac{1}{2}} + (\mathfrak{t}-\gamma)[\psi]^{\frac{1}{2}}, \quad \varphi, \psi \in \text{dom}\,\mathfrak{t}.\tag{5.1.4}$$

It follows from (5.1.4) that

$$\left| (\mathbf{t} - \gamma)[\varphi]^{\frac{1}{2}} - (\mathbf{t} - \gamma)[\psi]^{\frac{1}{2}} \right| \le (\mathbf{t} - \gamma)[\varphi - \psi]^{\frac{1}{2}}, \quad \varphi, \psi \in \text{dom } \mathbf{t}.\tag{5.1.5}$$

The following continuity property is a simple consequence of (5.1.5). For a sequence (ϕn) in dom t and ϕ ∈ dom t one has

$$(\mathbf{t} - \gamma)[\varphi - \varphi\_n] \to 0 \quad \Rightarrow \quad (\mathbf{t} - \gamma)[\varphi\_n] \to (\mathbf{t} - \gamma)[\varphi]. \tag{5.1.6}$$
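For completeness, (5.1.6) follows from (5.1.5) in one line: if (t − γ)[ϕ − ϕ_n] → 0, then

```latex
\bigl|(\mathfrak{t}-\gamma)[\varphi_n]^{\frac{1}{2}} - (\mathfrak{t}-\gamma)[\varphi]^{\frac{1}{2}}\bigr|
  \le (\mathfrak{t}-\gamma)[\varphi_n-\varphi]^{\frac{1}{2}} \to 0,
```

and squaring the convergent square roots gives (t − γ)[ϕ_n] → (t − γ)[ϕ].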

Let t be a semibounded form in H with lower bound γ and let a<γ. Equip the space dom t ⊂ H with the form

$$(\varphi, \psi)\_{\mathbf{t}-a} = \mathbf{t}[\varphi, \psi] - a(\varphi, \psi), \quad \varphi, \psi \in \text{dom } \mathbf{t}.\tag{5.1.7}$$

By rewriting this definition as

$$(\varphi, \psi)\_{t-a} = (t-\gamma)[\varphi, \psi] + (\gamma - a)(\varphi, \psi), \quad \varphi, \psi \in \text{dom } \mathfrak{t},\tag{5.1.8}$$

one sees that (·, ·)_{t−a} is the sum of the semidefinite inner product t − γ and the inner product (γ − a)(·, ·). Hence, (·, ·)_{t−a} is an inner product on dom t and the corresponding norm ‖·‖_{t−a} satisfies the inequality

$$\|\varphi\|\_{\mathfrak{t}-a}^2 \ge (\gamma - a) \|\varphi\|^2, \quad \varphi \in \text{dom } \mathfrak{t}.\tag{5.1.9}$$

When dom t is equipped with the inner product (·, ·)_{t−a}, the resulting inner product space will be denoted by H_{t−a}. Note that if γ > 0, then obviously a = 0 is a natural choice in the above and the following arguments.

**Lemma 5.1.3.** Let t be a semibounded form in H with lower bound γ and let a < γ. Let (ϕ_n) be a sequence in dom t. Then (ϕ_n) is a Cauchy sequence in H_{t−a} if and only if

$$\mathfrak{t}[\varphi\_n - \varphi\_m] \to 0 \quad \text{and} \quad \|\varphi\_n - \varphi\_m\| \to 0. \tag{5.1.10}$$

Proof. According to (5.1.8), (ϕ_n) is a Cauchy sequence in H_{t−a} if and only if

$$(\mathbf{t} - \gamma)[\varphi\_n - \varphi\_m] \to 0 \quad \text{and} \quad \left\| \varphi\_n - \varphi\_m \right\|^2 \to 0. \tag{5.1.11}$$

Now assume that (ϕ_n) is a Cauchy sequence in H_{t−a}. Then it follows from (5.1.11) that (ϕ_n) is a Cauchy sequence in H and that

$$\mathbf{t}[\varphi\_n - \varphi\_m] = (\mathbf{t} - \gamma)[\varphi\_n - \varphi\_m] + \gamma ||\varphi\_n - \varphi\_m||^2 \to 0,$$

which shows (5.1.10). Conversely, if the sequence (ϕ_n) satisfies (5.1.10), then it follows from (5.1.7) that (ϕ_n) is a Cauchy sequence in H_{t−a}. □

Let (ϕ_n) be a Cauchy sequence in H_{t−a}. Since H is a Hilbert space, it follows from Lemma 5.1.3 that there is an element ϕ ∈ H such that ϕ_n → ϕ in H.

**Definition 5.1.4.** Let t be a semibounded form in H. A sequence (ϕn) in dom t is said to be t-convergent to an element ϕ ∈ H, not necessarily belonging to dom t, if

ϕ_n → ϕ in H and t[ϕ_n − ϕ_m] → 0, n, m → ∞.

This type of convergence will be denoted by ϕ_n →_t ϕ.

The following result is a direct consequence of Lemma 5.1.3 and the completeness of H.

**Corollary 5.1.5.** Let t be a semibounded form in H with lower bound γ and let a < γ. Then any Cauchy sequence in H_{t−a} is t-convergent. Conversely, any t-convergent sequence in dom t is a Cauchy sequence in H_{t−a}.

If the sequence (ϕn) in dom t is t-convergent, then by Definition 5.1.4

(t − γ)[ϕ_n − ϕ_m] → 0 and ‖ϕ_n − ϕ_m‖ → 0.

Thus, one has the following result.

**Corollary 5.1.6.** Let t be a semibounded form in H with lower bound γ and let the sequence (ϕ_n) in dom t be t-convergent. Then the sequences ((t − γ)[ϕ_n]), (t[ϕ_n]), and (‖ϕ_n‖) converge and, consequently, they are bounded.

Proof. Since γ is the lower bound of t, one has (t − γ)[ϕ_n − ϕ_m] → 0. Hence, (5.1.5) shows that ((t − γ)[ϕ_n]^{1/2}) is a Cauchy sequence, and therefore so is ((t − γ)[ϕ_n]). Then the same is true for the sequence (t[ϕ_n]) and it is also clear that (‖ϕ_n‖) is a Cauchy sequence. In particular, the sequences ((t − γ)[ϕ_n]), (t[ϕ_n]), and (‖ϕ_n‖) are bounded. □

The t-convergence is preserved when one takes a sum of sequences. To see this, let (ϕn) and (ψn) be sequences in dom t such that

$$
\varphi\_n \to\_\mathfrak{t} \varphi \quad \text{and} \quad \psi\_n \to\_\mathfrak{t} \psi
$$

for some ϕ, ψ ∈ H. Then clearly ϕ_n + ψ_n → ϕ + ψ in H and

$$(\mathfrak{t} - \gamma)[\varphi\_n + \psi\_n - (\varphi\_m + \psi\_m)]^{\frac{1}{2}} \le (\mathfrak{t} - \gamma)[\varphi\_n - \varphi\_m]^{\frac{1}{2}} + (\mathfrak{t} - \gamma)[\psi\_n - \psi\_m]^{\frac{1}{2}},$$

by the triangle inequality in (5.1.4). Therefore,

$$
\varphi\_n \to\_\mathbf{t} \varphi \quad \text{and} \quad \psi\_n \to\_\mathbf{t} \psi \quad \Rightarrow \quad \varphi\_n + \psi\_n \to\_\mathbf{t} \varphi + \psi. \tag{5.1.12}
$$

As a consequence, one sees that

$$
\varphi\_n \to\_\mathfrak{t} \varphi \quad \text{and} \quad \psi\_n \to\_\mathfrak{t} \psi \quad \Rightarrow \quad \lim\_{n \to \infty} \mathfrak{t}[\varphi\_n, \psi\_n] \quad \text{exists.} \tag{5.1.13}
$$

This last implication follows easily from Corollary 5.1.6 and (5.1.12) by the polarization formula in (5.1.1).
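For the reader's convenience: with the form linear in the first entry (the convention assumed here for (5.1.1)), the polarization formula expresses the sesquilinear form through its quadratic form,

```latex
\mathfrak{t}[\varphi,\psi] = \tfrac{1}{4}\bigl(
    \mathfrak{t}[\varphi+\psi] - \mathfrak{t}[\varphi-\psi]
    + i\,\mathfrak{t}[\varphi+i\psi] - i\,\mathfrak{t}[\varphi-i\psi] \bigr).
```

The limit in (5.1.13) then exists because each of the four quadratic terms converges: the corresponding sums, such as ϕ_n + ψ_n, are t-convergent by (5.1.12), so Corollary 5.1.6 applies.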

Assume that ϕ_n ∈ dom t and that ϕ_n →_t ϕ for some ϕ ∈ H. Now the question is when ϕ ∈ dom t and, if this is the case, when t[ϕ_n − ϕ] → 0? This question gives rise to the notions of closed form and closable form in Definition 5.1.7 and Definition 5.1.11.

**Definition 5.1.7.** A semibounded form t in H is said to be closed if

ϕ_n →_t ϕ ⇒ ϕ ∈ dom t and t[ϕ_n − ϕ] → 0.

The statement in (5.1.13) can now be made more precise when the form is closed.

**Lemma 5.1.8.** Let t be a closed semibounded form in H. Then

$$
\varphi\_n \to\_\mathbf{t} \varphi \quad \Rightarrow \quad \varphi \in \text{dom}\,\mathbf{t} \quad \text{and} \quad \mathbf{t}[\varphi\_n] \to \mathbf{t}[\varphi], \tag{5.1.14}
$$

and, consequently,

$$
\varphi\_n \to\_\mathbf{t} \varphi, \quad \psi\_n \to\_\mathbf{t} \psi \quad \Rightarrow \quad \varphi, \psi \in \text{dom } \mathbf{t} \quad \text{and} \quad \mathbf{t}[\varphi\_n, \psi\_n] \to \mathbf{t}[\varphi, \psi]. \tag{5.1.15}
$$

Proof. Assume that t is a closed semibounded form and ϕ_n →_t ϕ. Then ϕ ∈ dom t and t[ϕ_n − ϕ] → 0, and since ϕ_n → ϕ it follows that (t − γ)[ϕ_n − ϕ] → 0. Hence, (t − γ)[ϕ_n] → (t − γ)[ϕ] by (5.1.6), and therefore t[ϕ_n] → t[ϕ]. This shows (5.1.14). Now polarization and (5.1.12) yield the assertion (5.1.15). □

**Lemma 5.1.9.** Let t be a semibounded form in H with lower bound γ and let a < γ. Then the following statements are equivalent:

(i) the inner product space H_{t−a} is complete, that is, H_{t−a} is a Hilbert space;

(ii) t is closed.

In particular, t is closed if and only if t − x is closed for some, and hence for all, x ∈ ℝ.

Proof. (i) ⇒ (ii) Assume that H_{t−a} is complete. To show that t is closed, assume that ϕ_n →_t ϕ, so

$$
\varphi\_n \to \varphi \quad \text{and} \quad \mathfrak{t}[\varphi\_n - \varphi\_m] \to 0.
$$

In particular, this implies by Lemma 5.1.3 that ‖ϕ_n − ϕ_m‖_{t−a} → 0. Since H_{t−a} is complete, there is an element ϕ_0 ∈ H_{t−a} = dom t such that ‖ϕ_n − ϕ_0‖_{t−a} → 0. Hence, by (5.1.9),

$$\|\varphi\_n - \varphi\_0\| \to 0.$$

Thus ϕ = ϕ_0 ∈ dom t. Therefore, ‖ϕ_n − ϕ‖_{t−a} → 0 and by (5.1.7) one sees that t[ϕ_n − ϕ] → 0. This proves that t is closed.

(ii) ⇒ (i) Assume that t is closed. To show that H_{t−a} is complete, let (ϕ_n) be a Cauchy sequence in H_{t−a}. This implies that ϕ_n →_t ϕ for some ϕ ∈ H; cf. Corollary 5.1.5. The closedness of t gives that ϕ ∈ dom t = H_{t−a} and t[ϕ_n − ϕ] → 0. By (5.1.7) this leads to ‖ϕ_n − ϕ‖_{t−a} → 0, so that H_{t−a} is complete.

Since t − x is a semibounded form in H with lower bound γ − x, the last statement follows from H_{t−a} = H_{(t−x)−(a−x)} and the equivalence of (i) and (ii). □

Let t be a semibounded form in H with lower bound γ and let H_{t−a} be the corresponding inner product space with a < γ. In general t is not closed and hence H_{t−a} is not complete; cf. Lemma 5.1.9. If t_1 is a semibounded form with lower bound γ_1, which extends the semibounded form t with lower bound γ, then

γ_1 ≤ γ.

Note that for a < γ_1 one has that t_1 is closed if and only if H_{t_1−a} is a Hilbert space. The question is when such a closed extension t_1 exists and, if so, to determine the smallest such extension of t. In order to construct an extension of t, note that Lemma 5.1.8 suggests the following definition.

**Definition 5.1.10.** Let t be a semibounded form in H. The linear subspace dom t̃ is the set of all ϕ ∈ H for which there exists a sequence (ϕ_n) in dom t such that ϕ_n →_t ϕ.

It is clear that dom t̃ is an extension of dom t. To establish the linearity of dom t̃, recall the property (5.1.12). According to (5.1.13), it would now be natural to define the form t̃ on dom t̃ as an extension of t by

$$\tilde{\mathfrak{t}}[\varphi,\psi] = \lim\_{n \to \infty} \mathfrak{t}[\varphi\_n,\psi\_n] \quad \text{for any} \quad \varphi\_n \to\_{\mathfrak{t}} \varphi, \quad \psi\_n \to\_{\mathfrak{t}} \psi,\tag{5.1.16}$$

as the limit on the right-hand side exists. However, in general the limit on the right-hand side of (5.1.16) depends on the choice of the sequences (ϕ_n) and (ψ_n), so that t̃ may not be well defined as a form.

**Definition 5.1.11.** A semibounded form t in H is said to be closable if for any sequence (ϕn) in dom t

$$
\varphi\_n \to\_\mathbf{t} 0 \quad \Rightarrow \quad \mathbf{t}[\varphi\_n] \to 0.
$$
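A classical example of a non-closable form, included as an illustration (the concrete choice of space is an assumption, not taken from the text): in H = L²(ℝ) consider the point evaluation form

```latex
% Illustration: point evaluation at 0 on compactly supported continuous functions.
\mathfrak{t}[\varphi,\psi] = \varphi(0)\,\overline{\psi(0)},
\qquad \operatorname{dom}\mathfrak{t} = C_c(\mathbb{R}),
```

which is nonnegative. Choosing ϕ_n ∈ C_c(ℝ) with ϕ_n(0) = 1 and ‖ϕ_n‖ → 0 gives t[ϕ_n − ϕ_m] = |ϕ_n(0) − ϕ_m(0)|² = 0, so ϕ_n →_t 0, while t[ϕ_n] = 1 does not tend to 0; hence t is not closable.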

It will be shown that the extension procedure in (5.1.16) defines a form extension of t if t is closable. In fact, in this case the resulting form t̃ is unique, being the smallest closed extension, and will be called the closure of t.

**Theorem 5.1.12.** Let t be a semibounded form in H with lower bound γ and let a < γ. Then t has a closed extension if and only if t is closable. In fact, if t is closable, then the form t̃ in (5.1.16) is the smallest closed extension of t, the lower bounds satisfy m(t̃) = m(t), and the inner product space H_{t−a} is dense in the Hilbert space H_{t̃−a}. Moreover, t is closable if and only if t − x is closable for some, and hence for all, x ∈ ℝ, in which case

$$
\widetilde{\mathfrak{t}-x} = \widetilde{\mathfrak{t}} - x.\tag{5.1.17}
$$

Proof. (⇒) Let t_1 be a closed extension of t. In order to show that t is closable, assume that ϕ_n →_t 0. The form t_1 is an extension of t and this implies ϕ_n →_{t_1} 0. Since t_1 is closed, it follows that

$$\mathfrak{t}[\varphi\_n] = \mathfrak{t}\_1[\varphi\_n] \to 0.$$

Hence, t is closable.

(⇐) Assume that t is closable. It will be shown that t̃ in (5.1.16) is a well-defined form on dom t̃. It is clear from (5.1.13) that the limit on the right-hand side of (5.1.16) exists. To verify that this limit depends only on the elements ϕ, ψ and not on the particular sequences (ϕ_n), (ψ_n), let (ϕ′_n), (ψ′_n) be other sequences such that ϕ′_n →_t ϕ and ψ′_n →_t ψ. Then

$$
\varphi\_n' - \varphi\_n \to\_\mathbf{t} 0 \quad \text{and} \quad \psi\_n' - \psi\_n \to\_\mathbf{t} 0;
$$

cf. (5.1.12). In particular, this gives

$$
\varphi\_n' - \varphi\_n \to 0 \quad \text{and} \quad \psi\_n' - \psi\_n \to 0,
$$

while the closability of t implies that

$$\mathfrak{t}[\varphi\_n' - \varphi\_n] \to 0 \quad \text{and} \quad \mathfrak{t}[\psi\_n' - \psi\_n] \to 0.$$

To see that the sequences (t[ϕ′_n, ψ′_n]) and (t[ϕ_n, ψ_n]) have the same limit, consider the inequalities

$$\begin{split} &|(\mathbf{t}-\gamma)[\varphi\_{n}^{\prime},\psi\_{n}^{\prime}]-(\mathbf{t}-\gamma)[\varphi\_{n},\psi\_{n}]| \\ &=|(\mathbf{t}-\gamma)[\varphi\_{n}^{\prime}-\varphi\_{n},\psi\_{n}^{\prime}]+(\mathbf{t}-\gamma)[\varphi\_{n},\psi\_{n}^{\prime}-\psi\_{n}]| \\ &\leq|(\mathbf{t}-\gamma)[\varphi\_{n}^{\prime}-\varphi\_{n},\psi\_{n}^{\prime}]|+|(\mathbf{t}-\gamma)[\varphi\_{n},\psi\_{n}^{\prime}-\psi\_{n}]| \\ &\leq(\mathbf{t}-\gamma)[\varphi\_{n}^{\prime}-\varphi\_{n}]^{\frac{1}{2}}(\mathbf{t}-\gamma)[\psi\_{n}^{\prime}]^{\frac{1}{2}}+(\mathbf{t}-\gamma)[\varphi\_{n}]^{\frac{1}{2}}(\mathbf{t}-\gamma)[\psi\_{n}^{\prime}-\psi\_{n}]^{\frac{1}{2}}. \end{split}$$

Clearly, due to the closability assumption, the terms

$$(\mathfrak{t} - \gamma)[\varphi\_n' - \varphi\_n] \quad \text{and} \quad (\mathfrak{t} - \gamma)[\psi\_n' - \psi\_n]$$

converge to 0 as n → ∞, while the terms

$$(\mathfrak{t} - \gamma)[\psi\_n'] \quad \text{and} \quad (\mathfrak{t} - \gamma)[\varphi\_n]$$

are bounded since ψ′_n →_t ψ and ϕ_n →_t ϕ, respectively; cf. Corollary 5.1.6. It follows that t[ϕ′_n, ψ′_n] − t[ϕ_n, ψ_n] → 0 and hence t̃ in (5.1.16) is a well-defined form. Moreover, it is clear that t̃ extends t: t ⊂ t̃.

The form t̃ is semibounded. To see this, let ϕ ∈ dom t̃. Then there exists a sequence (ϕ_n) in dom t such that ϕ_n →_t ϕ. In particular, ϕ_n → ϕ and hence ‖ϕ_n‖ → ‖ϕ‖. According to (5.1.16),

$$\widetilde{\mathfrak{t}}[\varphi] = \lim\_{n \to \infty} \mathfrak{t}[\varphi\_n],$$

where t[ϕ_n] ≥ γ‖ϕ_n‖². Therefore,

$$\widetilde{\mathfrak{t}}[\varphi] \ge \gamma \|\varphi\|^2, \quad \varphi \in \text{dom}\, \widetilde{\mathfrak{t}},$$

so that t̃ is semibounded. Moreover, this argument shows that the lower bound of t̃ is at least γ; since t̃ extends t, it is also at most γ. Hence, t̃ and t have the same lower bound.

The argument to show that t̃ is closed is based on the following observation for the extension t̃:

$$
\varphi\_n \to\_{\mathfrak{t}} \varphi \quad \Rightarrow \quad \widetilde{\mathfrak{t}}[\varphi - \varphi\_n] \to 0. \tag{5.1.18}
$$

To see this, let ϕ_n →_t ϕ, that is, ϕ_n → ϕ and lim_{m,n→∞} t[ϕ_n − ϕ_m] = 0. Now fix n ∈ ℕ; then ϕ_m →_t ϕ implies that

$$
\varphi\_m - \varphi\_n \to\_\mathfrak{t} \varphi - \varphi\_n \quad \text{as} \quad m \to \infty,
$$

so that, by definition,

$$\widetilde{\mathfrak{t}}[\varphi - \varphi\_n] = \lim\_{m \to \infty} \mathfrak{t}[\varphi\_m - \varphi\_n].$$

Now taking n → ∞ gives (5.1.18).

The following three steps will establish that t̃ is closed or, equivalently, that H_{t̃−a}, a < γ, is complete.

Step 1. H_{t−a} is dense in H_{t̃−a}. Indeed, let ϕ ∈ H_{t̃−a} = dom t̃. Then there is a sequence (ϕ_n) in H_{t−a} = dom t such that ϕ_n →_t ϕ. It follows from this assumption and (5.1.18) that

$$
\varphi\_n \to \varphi \quad \text{and} \quad \widetilde{\mathfrak{t}}[\varphi - \varphi\_n] \to 0,
$$

in other words,

$$\|\varphi - \varphi\_n\|\_{\widetilde{\mathfrak{t}}-a}^2 = \widetilde{\mathfrak{t}}[\varphi - \varphi\_n] - a\|\varphi - \varphi\_n\|^2 \to 0.$$

This shows that H_{t−a} is dense in H_{t̃−a}.

Step 2. Every Cauchy sequence in H_{t−a} is convergent in H_{t̃−a}. To see this, let (ϕ_n) be a Cauchy sequence in H_{t−a}. Then clearly there exists an element ϕ ∈ H such that ϕ_n →_t ϕ; cf. Corollary 5.1.5. Again by (5.1.18) it follows that

$$\|\varphi - \varphi\_n\|\_{\widetilde{\mathfrak{t}} - a} \to 0,$$

which now shows that the Cauchy sequence (ϕ_n) in H_{t−a} is convergent in H_{t̃−a} to, in fact, ϕ ∈ dom t̃ = H_{t̃−a}.

Step 3. H_{t̃−a} is a Hilbert space. To see this, let (χ_n) be a Cauchy sequence in H_{t̃−a}. By Step 1, there is an element ϕ_n ∈ H_{t−a} such that

$$\|\chi\_n - \varphi\_n\|\_{\widetilde{\mathfrak{t}}-a} \le \frac{1}{n}.$$

Hence, the approximating sequence (ϕ_n) is a Cauchy sequence in H_{t−a}. By Step 2, (ϕ_n) converges in H_{t̃−a}, which implies that the original sequence (χ_n) converges in H_{t̃−a}.

Next it will be shown that t̃ is the smallest closed extension of t. Assume that t_1 is a closed extension of t: t ⊂ t_1. Let ϕ ∈ dom t̃; then there exists a sequence (ϕ_n) in dom t with ϕ_n →_t ϕ. Then also ϕ_n →_{t_1} ϕ and hence ϕ ∈ dom t_1. Therefore, dom t̃ ⊂ dom t_1. For every ϕ, ψ ∈ dom t̃ it follows via corresponding sequences (ϕ_n), (ψ_n) in dom t with ϕ_n →_t ϕ and ψ_n →_t ψ that

$$\widetilde{\mathfrak{t}}[\varphi,\psi] = \lim\_{n \to \infty} \mathfrak{t}[\varphi\_n,\psi\_n] = \lim\_{n \to \infty} \mathfrak{t}\_1[\varphi\_n,\psi\_n] = \mathfrak{t}\_1[\varphi,\psi],$$

where the first equality follows from (5.1.16), the second equality is valid as t_1 extends t, and the third equality follows from (5.1.15). Therefore, t̃ ⊂ t_1, and t̃ is the smallest closed extension of t.

As to the last statement, observe that Definition 5.1.11 implies that t is closable if and only if t − x is closable for some, and hence for all, x ∈ ℝ. Finally, (5.1.17) follows from (5.1.16). □

Thus, a closed semibounded form t_1 which extends t contains the closure t̃. The next corollary is a simple but useful description of the gap between t_1 and t̃.

**Corollary 5.1.13.** Let the semibounded form t with lower bound γ be closable and let the closed form t_1 with lower bound γ_1 be an extension of t, so that γ_1 ≤ γ. Assume that a < γ_1. Then

$$\mathfrak{H}\_{\mathfrak{t}\_1-a} = \left\{ \varphi \in \mathfrak{H}\_{\mathfrak{t}\_1-a} : (\varphi, \psi)\_{\mathfrak{t}\_1-a} = 0, \ \psi \in \mathfrak{H}\_{\widetilde{\mathfrak{t}}-a} \right\} \oplus\_{\mathfrak{t}\_1-a} \mathfrak{H}\_{\widetilde{\mathfrak{t}}-a}.$$

Let t be a closed semibounded form in H. Let D ⊂ dom t be a linear subspace and consider the restriction t_D of t to D,

$$\mathfrak{t}\_{\mathfrak{D}}[\varphi,\psi] = \mathfrak{t}[\varphi,\psi], \quad \varphi,\psi \in \mathfrak{D}.$$

Since t_D is a restriction of a closed form, it is closable; see Theorem 5.1.12. Let t̃_D be the closure of t_D. Then by definition dom t̃_D is the set of all ϕ ∈ H for which there exists a sequence (ϕ_n) in D with ϕ_n →_{t_D} ϕ, which means ϕ_n →_t ϕ. Since t is closed, one sees in particular that dom t̃_D ⊂ dom t. Moreover, one has

$$\widetilde{\mathfrak{t}}\_{\mathfrak{D}}[\varphi,\psi] = \lim\_{n \to \infty} \mathfrak{t}\_{\mathfrak{D}}[\varphi\_n,\psi\_n] = \lim\_{n \to \infty} \mathfrak{t}[\varphi\_n,\psi\_n] = \mathfrak{t}[\varphi,\psi]$$

for ϕ, ψ ∈ dom t̃_D, where the first equality is by definition, and the third equality follows from Lemma 5.1.8. Hence, the closure t̃_D of t_D is the restriction of t to dom t̃_D. Since t is closed, it follows that ϕ ∈ dom t̃_D if and only if there is a sequence (ϕ_n) in D such that

$$
\varphi\_n \to \varphi \quad \text{and} \quad \mathfrak{t}[\varphi\_n - \varphi] \to 0.
$$

**Definition 5.1.14.** Let t be a closed semibounded form in H. A linear subspace D of dom t is said to be a core of t if the closure t̃_D of the restriction t_D of t to D coincides with t.

Therefore, D ⊂ dom t is a core of t if and only if for every ϕ ∈ dom t there is a sequence (ϕn) in D such that

$$
\varphi\_n \to \varphi \quad \text{and} \quad \mathfrak{t}[\varphi\_n - \varphi] \to 0. \tag{5.1.19}
$$

This leads to the following corollary.

**Corollary 5.1.15.** Let t be a closed semibounded form in H with lower bound γ, let a < γ, and let D ⊂ dom t be a linear subspace. Then D is a core of t if and only if D is dense in the Hilbert space H_{t−a}.

Note that in the situation of Theorem 5.1.12 the original domain dom t is a core of the closure t̃ of t (recall that the form t̃ is closed). The following fact is useful: if t and s are closed semibounded forms in H which coincide on D ⊂ dom t ∩ dom s and D is a core of both t and s, then t = s.

Recall the definition of the sum of two forms in Definition 5.1.1 and observe that a sum of semibounded forms is also semibounded. The following result is concerned with additive perturbations of forms: it provides a sufficient condition under which the sum of a closed semibounded form and a symmetric form remains closed and semibounded. Sometimes this result is referred to as the KLMN theorem, named after Kato, Lions, Lax, Milgram, and Nelson. For a typical application to Sturm–Liouville operators, see, e.g., Lemma 6.8.3.

**Theorem 5.1.16.** Assume that t is a closed semibounded form in H and let s be a symmetric form in H such that dom t ⊂ dom s and

$$|\mathfrak{s}[\varphi]| \le a \|\varphi\|^2 + b\mathfrak{t}[\varphi], \qquad \varphi \in \text{dom } \mathfrak{t},\tag{5.1.20}$$

holds for some a ≥ 0 and b ∈ [0, 1). Then the symmetric form

$$\mathfrak{t} + \mathfrak{s}, \qquad \text{dom}\,(\mathfrak{t} + \mathfrak{s}) = \text{dom}\,\mathfrak{t},$$

is closed and semibounded in H. Furthermore, if D is a core of t, then D is also a core of t + s.

Proof. Let γ be the lower bound of t. Fix some a′ < γ with a′ < 0. For all ϕ ∈ dom t, ϕ ≠ 0, one obtains from (5.1.20) that

$$\mathfrak{s}[\varphi] \ge -a \|\varphi\|^2 - b\mathfrak{t}[\varphi],\tag{5.1.21}$$

and hence

$$(\mathfrak{t} + \mathfrak{s})[\varphi] \ge (1 - b)\mathfrak{t}[\varphi] - a\|\varphi\|^2 \\ > \left((1 - b)a' - a\right)\|\varphi\|^2 = c'\|\varphi\|^2,$$

where c′ = (1 − b)a′ − a < 0. This shows that t + s is semibounded from below. Furthermore, the estimate (5.1.21) also shows that

$$\begin{aligned} (1 - b) \|\varphi\|\_{\mathfrak{t} - a'}^2 &= (1 - b)\mathfrak{t}[\varphi] - (1 - b)a' \|\varphi\|^2 \\ &= \mathfrak{t}[\varphi] - b\mathfrak{t}[\varphi] - a \|\varphi\|^2 - \left( (1 - b)a' - a \right) \|\varphi\|^2 \\ &\le (\mathfrak{t} + \mathfrak{s})[\varphi] - \left( (1 - b)a' - a \right) \|\varphi\|^2 \\ &= \|\varphi\|\_{\mathfrak{t} + \mathfrak{s} - c'}^2. \end{aligned}$$

Using (5.1.20) one obtains

$$\begin{aligned} \|\varphi\|\_{\mathfrak{t}+\mathfrak{s}-c'}^2 &= \mathfrak{t}[\varphi] + \mathfrak{s}[\varphi] - c' \|\varphi\|^2 \\ &\le (1+b)\mathfrak{t}[\varphi] - (c'-a) \|\varphi\|^2 \\ &\le b' \|\varphi\|\_{\mathfrak{t}-a'}^2, \end{aligned}$$

where b′ = max{1 + b, (c′ − a)/a′}. Therefore, the above estimates imply that the norms ‖·‖_{t−a′} and ‖·‖_{t+s−c′} are equivalent on dom t = dom (t + s). Since t is closed, H_{t−a′} is a Hilbert space and hence H_{t+s−c′} is a Hilbert space, that is, the form t + s is closed; cf. Lemma 5.1.9. The assertion about the core D is clear from Corollary 5.1.15. □
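A simple instance of condition (5.1.20), offered as an illustration (the concrete choice of H and t is an assumption, not taken from the text): let t be a closed nonnegative form, for example the Dirichlet form on H = L²(ℝ) with dom t = H¹(ℝ), and let s be multiplication by a real-valued V ∈ L^∞(ℝ),

```latex
\mathfrak{s}[\varphi,\psi] = \int_{\mathbb{R}} V(x)\,\varphi(x)\,\overline{\psi(x)}\,\mathrm{d}x,
\qquad |\mathfrak{s}[\varphi]| \le \|V\|_\infty \|\varphi\|^2 .
```

Then (5.1.20) holds with a = ‖V‖_∞ and b = 0, and Theorem 5.1.16 shows that t + s is closed and semibounded on dom t.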

Semibounded relations in a Hilbert space generate closable semibounded forms as will be shown in the following lemma. Note that if a relation is semibounded, then so is its closure, with the same lower bound; this follows directly from Definition 1.4.5. Furthermore, the closure will generate the same form. The particular situation of semibounded self-adjoint relations will be considered in detail in Theorem 5.1.18 and Proposition 5.1.19.

**Lemma 5.1.17.** Let S be a semibounded relation in H with lower bound m(S). Then the form t_S given by

$$\mathfrak{t}\_S[\varphi,\psi] = (\varphi',\psi), \quad \{\varphi,\varphi'\}, \{\psi,\psi'\} \in S,\tag{5.1.22}$$

with dom t_S = dom S, is well defined, semibounded with the lower bound m(S), and closable. The closure t̃_S of t_S is a semibounded closed form whose lower bound is equal to m(S), and

$$\text{dom}\,\widetilde{\mathfrak{t}}\_S \subset \overline{\text{dom}\,S}.\tag{5.1.23}$$

Moreover, dom S = dom t_S is a core of t̃_S. Furthermore, with the closure S̄ of S one has

$$
\widetilde{\mathfrak{t}}\_S = \widetilde{\mathfrak{t}}\_{\overline{S}}.\tag{5.1.24}
$$

Proof. As a semibounded relation, S is automatically symmetric; it follows that mul S ⊂ mul S* = (dom S)^⊥, and hence

$$(\varphi', \psi) = (\varphi'', \psi), \quad \{\varphi, \varphi'\}, \{\varphi, \varphi''\}, \{\psi, \psi'\} \in S.$$

Thus, the form in (5.1.22) is well defined with dom t_S = dom S. By definition t_S is semibounded and its lower bound is clearly equal to γ = m(S).

In order to show that t_S is closable, let ϕ_n →_{t_S} 0. Then, equivalently,

ϕ_n → 0 and (t_S − γ)[ϕ_n − ϕ_m] → 0.

It suffices to verify that (t_S − γ)[ϕ_n] → 0. Note that there exists ϕ′_n ∈ H such that {ϕ_n, ϕ′_n} ∈ S. Then

$$(\mathbf{t}\_S - \gamma)[\varphi\_n] = (\mathbf{t}\_S - \gamma)[\varphi\_n, \varphi\_n] = (\mathbf{t}\_S - \gamma)[\varphi\_n, \varphi\_n - \varphi\_m] + (\mathbf{t}\_S - \gamma)[\varphi\_n, \varphi\_m],$$

and it follows with the help of the Cauchy–Schwarz inequality (5.1.3) for the nonnegative form t_S − γ and (5.1.22) that

$$\begin{split} |(\mathfrak{t}\_S - \gamma)[\varphi\_{n}]| &\leq |(\mathfrak{t}\_S - \gamma)[\varphi\_{n}, \varphi\_{n} - \varphi\_{m}]| + |(\mathfrak{t}\_S - \gamma)[\varphi\_{n}, \varphi\_{m}]| \\ &\leq (\mathfrak{t}\_S - \gamma)[\varphi\_{n}]^{\frac{1}{2}}(\mathfrak{t}\_S - \gamma)[\varphi\_{n} - \varphi\_{m}]^{\frac{1}{2}} + |(\varphi\_{n}' - \gamma\varphi\_{n}, \varphi\_{m})|. \end{split}$$

By Corollary 5.1.6, the sequence ((t_S − γ)[ϕ_n]) is bounded by M² for some M > 0. Moreover, for every ε > 0 there exists N ∈ ℕ such that (t_S − γ)[ϕ_n − ϕ_m] ≤ ε² for n, m ≥ N. Therefore,

$$|(\mathfrak{t}\_S - \gamma)[\varphi\_n]| \le M\varepsilon + |(\varphi\_n' - \gamma \varphi\_n, \varphi\_m)|, \qquad n, m \ge N.$$

Fix n ≥ N and let m → ∞. From |(ϕ′_n − γϕ_n, ϕ_m)| ≤ ‖ϕ′_n − γϕ_n‖ ‖ϕ_m‖ and ϕ_m → 0 it follows that |(t_S − γ)[ϕ_n]| ≤ Mε for n ≥ N. This shows that (t_S − γ)[ϕ_n] → 0 as n → ∞, and hence t_S is closable.

By Theorem 5.1.12, it is clear that the closure t̃_S of t_S is a semibounded closed form whose lower bound is equal to m(S). It also follows from the definition of t̃_S that the inclusion (5.1.23) holds. Furthermore, dom S = dom t_S is a core of t̃_S.

It remains to show (5.1.24). The inclusion t̃_S ⊂ t̃_{S̄} is clear. For the opposite inclusion, let ϕ ∈ dom t_{S̄} = dom S̄ and ϕ′ ∈ H such that {ϕ, ϕ′} ∈ S̄, in which case

$$\mathfrak{t}\_{\overline{S}}[\varphi,\varphi] = (\varphi',\varphi).$$

Then there exists a sequence ({ϕ_n, ϕ′_n}) in S with ϕ_n → ϕ and ϕ′_n → ϕ′, and hence

$$\mathfrak{t}\_S[\varphi\_n - \varphi\_m] = (\varphi\_n' - \varphi\_m', \varphi\_n - \varphi\_m) \to 0.$$

Therefore, ϕ_n →_{t_S} ϕ, so that ϕ ∈ dom t̃_S. Moreover,

$$\mathfrak{t}\_{\overline{S}}[\varphi,\varphi] = (\varphi',\varphi) = \lim\_{n \to \infty} (\varphi'\_n,\varphi\_n) = \lim\_{n \to \infty} \mathfrak{t}\_S[\varphi\_n,\varphi\_n] = \widetilde{\mathfrak{t}}\_S[\varphi,\varphi],$$

where in the last equality the definition of the closure in Theorem 5.1.12 was used. This implies t_{S̄} ⊂ t̃_S and hence t̃_{S̄} ⊂ t̃_S. Therefore, t̃_S = t̃_{S̄}. □

In the next theorem it is shown that every closed semibounded form can be represented by a semibounded self-adjoint relation.

**Theorem 5.1.18** (First representation theorem)**.** Assume that t is a closed semibounded form in H. Then there exists a semibounded self-adjoint relation H in H such that the following statements hold:

(i) dom H ⊂ dom t and

$$\mathbf{t}[\varphi, \psi] = (\varphi', \psi) \tag{5.1.25}$$

for every {ϕ, ϕ′} ∈ H and ψ ∈ dom t;


(ii) dom H is a core of t;

(iii) if ϕ ∈ dom t and ϕ′ ∈ H are such that

$$\mathbf{t}[\varphi, \psi] = (\varphi', \psi) \tag{5.1.26}$$

for every ψ in a core of t, then {ϕ, ϕ′} ∈ H;

(iv) mul H = (dom t)^⊥ and

$$\mathbf{t}[\varphi, \psi] = (H\_{\mathrm{op}}\,\varphi, \psi) \tag{5.1.27}$$

for every ϕ ∈ dom H and ψ ∈ dom t.

The semibounded self-adjoint relation H is uniquely determined by (i). The closed form t and the corresponding semibounded self-adjoint relation H have the same lower bound: m(t) = m(H). Moreover, for each x ∈ ℝ the closed semibounded form t − x corresponds to the semibounded self-adjoint relation H − x.

Proof. (i) Let m(t) = γ and choose a < γ. Then the assumption that t is closed is equivalent to the inner product space H_{t−a} being complete, where H_{t−a} = dom t is equipped with the inner product (·, ·)_{t−a} as in (5.1.7)–(5.1.8); cf. Lemma 5.1.9. For any fixed ω ∈ H consider the linear functional

$$\psi \mapsto (\psi, \omega)$$

defined for all ψ ∈ H_{t−a} = dom t ⊂ H. It follows from (5.1.9) that

$$|(\psi, \omega)| \le \|\psi\| \|\omega\| \le \left(\frac{1}{\sqrt{\gamma - a}} \|\omega\|\right) \|\psi\|\_{\mathfrak{t}-a}, \quad \psi \in \mathfrak{H}\_{\mathfrak{t}-a}.$$

Hence, the mapping ψ ↦ (ψ, ω) from H_{t−a} to ℂ is bounded with bound at most ‖ω‖/√(γ − a). Therefore, by the Riesz representation theorem, there exists an element ω̂ ∈ H_{t−a} such that for all ψ ∈ H_{t−a}:

$$(\psi, \omega) = (\psi, \widehat{\omega})\_{\mathfrak{t}-a}, \quad \|\widehat{\omega}\|\_{\mathfrak{t}-a} \le \frac{1}{\sqrt{\gamma - a}} \|\omega\|.$$

Taking conjugates for convenience, it follows from the definition (5.1.7) of (·, ·)_{t−a} that

$$(\omega, \psi) = (\widehat{\omega}, \psi)\_{\mathfrak{t}-a} = \mathfrak{t}[\widehat{\omega}, \psi] - a(\widehat{\omega}, \psi), \tag{5.1.28}$$

or, in other words,

$$\mathfrak{t}[\widehat{\omega}, \psi] = (\omega + a\widehat{\omega}, \psi), \quad \psi \in \mathfrak{H}\_{\mathfrak{t}-a}.\tag{5.1.29}$$

Note that the linear mapping A from H to H_{t−a} defined by Aω = ω̂ satisfies

$$\sqrt{\gamma - a}\, \|A\omega\| \le \|A\omega\|\_{\mathfrak{t}-a} \le \frac{1}{\sqrt{\gamma - a}} \|\omega\|,$$

where in the first inequality (5.1.9) was used. In other words, if A is interpreted as a mapping from H to H, then

$$\|A\omega\| \le \frac{1}{\gamma - a} \|\omega\|.$$

By means of A define the linear relation H in H by

$$H = \left\{ \left\{ A\omega, \omega + aA\omega \right\} \colon \omega \in \mathfrak{H} \right\},$$

so that

$$A = (H - a)^{-1}.$$

One sees that dom H = ran A ⊂ dom t and mul H = ker A. Moreover, every element {ϕ, ϕ′} ∈ H can be written as {ϕ, ϕ′} = {ω̂, ω + aω̂} for some ω ∈ H, so that by the identity (5.1.29) one obtains

$$\mathfrak{t}[\varphi,\psi] = (\varphi',\psi), \quad \{\varphi,\varphi'\} \in H, \qquad \psi \in \mathfrak{H}\_{\mathfrak{t}-a} = \text{dom}\,\mathfrak{t}.\tag{5.1.30}$$

It follows from (5.1.30) with ψ = ϕ that H is a semibounded relation with lower bound

$$m(H) \ge m(\mathfrak{t}) = \gamma. \tag{5.1.31}$$

It is clear that H is symmetric. According to the definition of H one sees that ran (H − a) = H, which, since a < γ, implies that H is self-adjoint; cf. Proposition 1.5.6. Thus, (i) has been proved.

(ii) The statement that dom H is a core of t is equivalent to the statement that dom H is dense in the Hilbert space $\mathfrak{H}\_{\mathfrak{t}-a}$. To verify denseness, assume that the element $\psi \in \mathfrak{H}\_{\mathfrak{t}-a}$ is orthogonal to dom H = ran A, i.e.,

$$0 = (A\omega, \psi)\_{\mathfrak{t}-a} = (\widehat{\omega}, \psi)\_{\mathfrak{t}-a} = (\omega, \psi),$$

for all ω ∈ H; cf. (5.1.28). This leads to ψ = 0. Hence, dom H = ran A is dense in the Hilbert space $\mathfrak{H}\_{\mathfrak{t}-a}$, and the assertion follows from Corollary 5.1.15.

(iii) Let ϕ ∈ dom t and ϕ′ ∈ H satisfy (5.1.26) for every ψ in a core D of the form t. Then (5.1.26) holds for all ψ ∈ dom t. To see this, let ψ ∈ dom t. Then there exists a sequence $(\psi\_n)$ in D such that $\psi\_n \to\_{\mathfrak{t}} \psi$, which implies that $\mathfrak{t}[\psi\_n - \psi] \to 0$. Since $\psi\_n \in$ D, the assumption yields

$$\mathfrak{t}[\varphi,\psi] = \lim\_{n \to \infty} \mathfrak{t}[\varphi,\psi\_n] = \lim\_{n \to \infty} (\varphi',\psi\_n) = (\varphi',\psi), \quad \psi \in \text{dom } \mathfrak{t},$$

so that (5.1.26) holds for all ψ ∈ dom t. Due to the symmetry of t this result may also be written as

$$\mathbf{t}[\psi,\varphi] = (\psi,\varphi'), \qquad \psi \in \text{dom } \mathbf{t}. \tag{5.1.32}$$

Now let {ψ, ψ′} ∈ H. Then ψ ∈ dom H ⊂ dom t and, by (i),

$$\mathfrak{t}[\psi,\varphi] = (\psi',\varphi),\tag{5.1.33}$$

because ϕ ∈ dom t. Comparing (5.1.32) and (5.1.33) gives

$$(\psi, \varphi') = (\psi', \varphi) \quad \text{for all} \quad \{\psi, \psi'\} \in H,$$

which leads to {ϕ, ϕ′} ∈ H∗ = H. This proves (iii).

(iv) It follows from (i) that if {0, ϕ′} ∈ H, then (ϕ′, ψ) = 0 for all ψ ∈ dom t, and hence mul H ⊂ (dom t)⊥. Conversely, as dom H ⊂ dom t by (i) and H is self-adjoint, (dom t)⊥ ⊂ (dom H)⊥ = mul H. This shows that mul H = (dom t)⊥.

To see (5.1.27), let {ϕ, ϕ′} ∈ H. Then $\varphi' = H\_{\text{op}}\varphi + \chi$, where χ ∈ mul H. Hence, from (5.1.25) one obtains

$$\mathfrak{t}[\varphi,\psi] = (\varphi',\psi) = (H\_{\mathrm{op}}\,\varphi+\chi,\psi) = (H\_{\mathrm{op}}\,\varphi,\psi),$$

which gives (5.1.27). This completes the proof of (iv).

To show uniqueness, assume that H′ is a semibounded self-adjoint relation in H such that dom H′ ⊂ dom t and

$$\mathfrak{t}[\varphi,\psi] = (\varphi',\psi)$$

for every {ϕ, ϕ′} ∈ H′ and ψ ∈ dom t. Then, in particular, one concludes that ϕ ∈ dom H′ ⊂ dom t and ϕ′ ∈ H, so that by (iii) it follows that {ϕ, ϕ′} ∈ H. Hence, H′ ⊂ H and one obtains equality, as H′ and H are both self-adjoint.

Recall that it has been shown in the proof of (i) that m(H) ≥ m(t); cf. (5.1.31). The equality follows from the fact that dom H is a core of t; see (ii). In fact, if ϕ ∈ dom t, then there exists a sequence $(\varphi\_n)$ in dom H such that $\varphi\_n \to\_{\mathfrak{t}} \varphi$. Therefore, if ϕ ≠ 0, then

$$\frac{\mathfrak{t}[\varphi]}{||\varphi||^2} = \lim\_{n \to \infty} \frac{\mathfrak{t}[\varphi\_n]}{||\varphi\_n||^2} = \lim\_{n \to \infty} \frac{(H\_{\text{op }}\varphi\_n, \varphi\_n)}{||\varphi\_n||^2} \ge m(H).$$

Since this inequality holds for every nontrivial ϕ ∈ dom t one concludes that

$$m(\mathbf{t}) = \inf \left\{ \frac{\mathbf{t}[\varphi]}{||\varphi||^2} : \varphi \in \text{dom } \mathbf{t}, \,\varphi \neq 0 \right\} \ge m(H),$$

and so m(t) = m(H).

Finally, note that for $x \in \mathbb{R}$ the form t − x is semibounded and closed, and the relation H − x is semibounded and self-adjoint. For {ϕ, ϕ′} ∈ H,

$$(\mathbf{t} - x)[\varphi, \psi] = (\varphi', \psi) - x(\varphi, \psi) = (\varphi' - x\varphi, \psi) \tag{5.1.34}$$

for all ψ ∈ dom t = dom (t − x). Observe from (iii) that {ϕ, ϕ′ − xϕ} ∈ H − x belongs to the semibounded self-adjoint relation corresponding to t − x. As H − x is self-adjoint and contained in the semibounded self-adjoint relation corresponding to t − x, both coincide, i.e., H − x corresponds to the closed semibounded form t − x. -
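The construction in the proof is easy to trace in finite dimensions, where a symmetric matrix H plays the role of the self-adjoint relation, t[ϕ, ψ] = (Hϕ, ψ) is the associated form with lower bound γ = m(H), and A = (H − a)⁻¹. The following Python sketch checks these facts for an ad hoc 2 × 2 example (illustration only, not part of the text):

```python
# Finite-dimensional sketch of Theorem 5.1.18: for a symmetric matrix H
# with smallest eigenvalue gamma, the form t[phi] = (H phi, phi) has lower
# bound gamma, and A = (H - a)^{-1} satisfies ||A|| <= 1/(gamma - a) for
# a < gamma.  The 2x2 matrix and the vector are ad hoc illustration values.
import math

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def inner(u, v):
    return u[0]*v[0] + u[1]*v[1]

def inv2(M):
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/d, -M[0][1]/d], [-M[1][0]/d, M[0][0]/d]]

def eigvals2(M):                       # eigenvalues of a symmetric 2x2 matrix
    tr = M[0][0] + M[1][1]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    r = math.sqrt(tr*tr/4.0 - det)
    return tr/2.0 - r, tr/2.0 + r

H = [[2.0, 1.0], [1.0, 3.0]]
gamma = eigvals2(H)[0]                 # lower bound m(H) = m(t)

# semiboundedness of the form: t[phi] >= gamma ||phi||^2
phi = [1.0, -2.0]
t_phi = inner(mat_vec(H, phi), phi)
assert t_phi >= gamma * inner(phi, phi) - 1e-12

# A = (H - a)^{-1} for a < gamma, with ||A|| <= 1/(gamma - a)
a = 0.5
A = inv2([[H[0][0] - a, H[0][1]], [H[1][0], H[1][1] - a]])
norm_A = max(abs(l) for l in eigvals2(A))
assert norm_A <= 1.0/(gamma - a) + 1e-9
```

In finite dimensions the bound on A is attained, since the spectrum of A is {(λ − a)⁻¹ : λ ∈ σ(H)}.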

The representation result in Theorem 5.1.18 gives assertions concerning the semibounded self-adjoint relation associated with a given semibounded form. In fact, every semibounded self-adjoint relation appears in such a context, as is shown in the following proposition; cf. Lemma 5.1.17.

**Proposition 5.1.19.** Let A be a semibounded self-adjoint relation in H. Then the semibounded, closable form defined by

$$\mathfrak{t}\_A[\varphi,\psi] = (\varphi',\psi), \quad \{\varphi,\varphi'\}, \{\psi,\psi'\} \in A,$$

has a closure whose corresponding semibounded self-adjoint relation is given by A.

Proof. Since A is semibounded and self-adjoint, Lemma 5.1.17 shows that the form $\mathfrak{t}\_A$ is well defined, semibounded, and closable. Moreover, dom A is a core of its closure $\mathfrak{t} = \overline{\mathfrak{t}\_A}$. Let H be the semibounded self-adjoint relation corresponding to $\mathfrak{t}$. Since $\mathfrak{t}$ is an extension of $\mathfrak{t}\_A$, one has

$$\mathfrak{t}[\varphi,\psi] = \mathfrak{t}\_A[\varphi,\psi] = (\varphi',\psi), \quad \{\varphi,\varphi'\}, \{\psi,\psi'\} \in A.$$

Therefore, Theorem 5.1.18 (iii) implies that {ϕ, ϕ′} ∈ H, since dom A is a core of t. Consequently, A ⊂ H, and since A and H are both self-adjoint, one concludes A = H. -

The following observation, based on Theorem 5.1.18 and Proposition 5.1.19, is included for completeness.

**Corollary 5.1.20.** There is a one-to-one correspondence between all closed semibounded forms and all closed semibounded self-adjoint relations via the identity (5.1.25) or, equivalently, via the identity (5.1.27) in the first representation theorem.

The correspondence between closed semibounded forms and semibounded self-adjoint relations in Theorem 5.1.18 can be illuminated further in the context of nonnegative forms and nonnegative self-adjoint relations. As a preparation, observe that a typical way to define forms is via linear operators.

**Lemma 5.1.21.** Let T be a linear operator from a Hilbert space H to a Hilbert space K and define a nonnegative form t in H by

$$\mathfrak{t}[\varphi, \psi] = (T\varphi, T\psi), \quad \varphi, \psi \in \text{dom } \mathfrak{t} = \text{dom}\, T.$$

Then

t is a closable form ⇔ T is a closable operator,

and in this case the closure of t is given by

$$\overline{\mathfrak{t}}[\varphi,\psi] = (\overline{T}\varphi,\overline{T}\psi), \quad \varphi,\psi \in \text{dom}\,\overline{\mathfrak{t}} = \text{dom}\,\overline{T}.\tag{5.1.35}$$

Proof. (⇒) Assume that t is closable. Let $(\varphi\_n)$ be a sequence in dom T such that $\varphi\_n \to 0$ in H and $T\varphi\_n \to \psi$ in K. Then

$$\mathfrak{t}[\varphi\_n - \varphi\_m] = \|T(\varphi\_n - \varphi\_m)\|^2 \to 0,$$

which implies that $\varphi\_n \to\_{\mathfrak{t}} 0$. Since t is closable, one obtains

$$\|T\varphi\_n\|^2 = \mathfrak{t}[\varphi\_n] \to 0,$$

so that $T\varphi\_n \to 0$. It follows that T is closable.

(⇐) Assume that T is closable. Let $(\varphi\_n)$ be a sequence in dom t with $\varphi\_n \to\_{\mathfrak{t}} 0$. Then $\varphi\_n \to 0$ in H and $(T\varphi\_n)$ is a Cauchy sequence in K. Hence, $T\varphi\_n \to \psi$ for some ψ ∈ K, and since T is closable one sees that ψ = 0. Therefore, $\mathfrak{t}[\varphi\_n] = \|T\varphi\_n\|^2 \to 0$. It follows that t is closable.

Finally, assume that t or, equivalently, T is closable. Then one has

$$\text{dom}\,\overline{\mathfrak{t}} = \text{dom}\,\overline{T}.\tag{5.1.36}$$

Indeed, for the inclusion (⊂) in (5.1.36) consider $\varphi \in \text{dom}\,\overline{\mathfrak{t}}$. Then there exists a sequence $(\varphi\_n)$ in dom t with $\varphi\_n \to\_{\mathfrak{t}} \varphi$; cf. Theorem 5.1.12. Hence, $\varphi\_n \to \varphi$ in H and $(T\varphi\_n)$ is a Cauchy sequence in K. Thus, there exists ϕ′ ∈ K such that $T\varphi\_n \to \varphi'$. Since T is closable, it follows that $\varphi \in \text{dom}\,\overline{T}$ and $\varphi' = \overline{T}\varphi$. Moreover, by Theorem 5.1.12 and (5.1.16) it follows that

$$\overline{\mathfrak{t}}[\varphi,\varphi] = \lim\_{n \to \infty} \mathfrak{t}[\varphi\_n,\varphi\_n] = \lim\_{n \to \infty} (T\varphi\_n, T\varphi\_n) = (\overline{T}\varphi, \overline{T}\varphi),$$

and polarization leads to the identity in (5.1.35). For the inclusion (⊃) in (5.1.36) let $\varphi \in \text{dom}\,\overline{T}$. Then $\overline{T}\varphi = \varphi'$ for some ϕ′ ∈ K, and there exists a sequence $(\varphi\_n)$ in dom T for which $\varphi\_n \to \varphi$ while $T\varphi\_n \to \varphi'$. In particular, it follows that $\varphi\_n \to\_{\mathfrak{t}} \varphi$. Therefore, $\varphi \in \text{dom}\,\overline{\mathfrak{t}}$; this proves (5.1.36). -

The following result specializes the first representation theorem to closed nonnegative forms as in Lemma 5.1.21. For a class of closed nonnegative forms it identifies the associated self-adjoint relations. Recall that for a closed operator R a linear subspace D ⊂ dom R is a core if the closure of the restriction R ↾ D of R to D coincides with R; cf. Lemma 1.5.10.

**Proposition 5.1.22.** Let T be a closed relation from a Hilbert space H to a Hilbert space K and let $T\_{\text{op}} = PT$ be the closed orthogonal operator part of T, where P is the orthogonal projection in K onto (mul T)⊥; cf. Theorem 1.3.15. Then the rule

$$\mathbf{t}[\varphi, \psi] = (T\_{\text{op}}\varphi, T\_{\text{op}}\psi), \quad \varphi, \psi \in \text{dom}\,\mathbf{t} = \text{dom}\,T\_{\text{op}} = \text{dom}\,T,\tag{5.1.37}$$

defines a closed nonnegative form t in H. The nonnegative self-adjoint relation corresponding to the form t is given by T∗T. Moreover, a subset of dom t = dom T is a core of the form t if and only if it is a core of the operator $T\_{\text{op}}$.

Proof. Since the operator $T\_{\text{op}}$ is closed, the nonnegative form t in (5.1.37) is closed, with dom t = dom $T\_{\text{op}}$ = dom T; cf. Lemma 5.1.21. Recall that T∗T is a nonnegative self-adjoint relation in H; cf. Lemma 1.5.8. Assume that ϕ ∈ dom T∗T and ψ ∈ dom T. Let ϕ′ ∈ H be any element such that {ϕ, ϕ′} ∈ T∗T. This implies that {ϕ, η} ∈ T and {η, ϕ′} ∈ T∗ for some η ∈ K. Clearly, $\eta = T\_{\text{op}}\varphi + \omega$ for some ω ∈ mul T. Since $\{T\_{\text{op}}\varphi + \omega, \varphi'\} \in T^{\*}$ and $\{\psi, T\_{\text{op}}\psi\} \in T$, one sees that

$$0 = (\varphi', \psi) - (T\_{\text{op }}\varphi + \omega, T\_{\text{op }}\psi) = (\varphi', \psi) - (T\_{\text{op }}\varphi, T\_{\text{op }}\psi), \quad \psi \in \text{dom } T,$$

i.e.,

$$\mathfrak{t}[\varphi,\psi] = (\varphi',\psi), \quad \{\varphi,\varphi'\} \in T^\*T, \quad \psi \in \text{dom}\,T.$$

Let H be the nonnegative self-adjoint relation associated with t via Theorem 5.1.18. According to (iii) of Theorem 5.1.18, the nonnegative self-adjoint relation T∗T satisfies T∗T ⊂ H, which gives T∗T = H.

Now let D ⊂ dom t = dom T be a linear subset. Then D is a core of t if and only if for every ϕ ∈ dom t = dom T there is a sequence (ϕn) in D such that

$$
\varphi\_n \to \varphi \quad \text{and} \quad \mathfrak{t}[\varphi\_n - \varphi] \to 0;
$$

cf. (5.1.19). In view of the definition of t, this condition reads as

$$
\varphi\_n \to \varphi \quad \text{and} \quad T\_{\text{op}}\varphi\_n \to T\_{\text{op}}\varphi,
$$

in other words, D is a core of $T\_{\text{op}}$. -
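In matrix form the statement of Proposition 5.1.22 is the familiar identity (Tϕ, Tψ) = (T∗Tϕ, ψ). The following Python sketch checks it for an ad hoc 3 × 2 matrix T (so T∗ is the transpose); the values are illustration only:

```python
# Finite-dimensional sketch of Proposition 5.1.22: for a matrix T the
# nonnegative form t[phi, psi] = (T phi, T psi) is represented by T*T,
# i.e. t[phi, psi] = (T*T phi, psi) for all phi, psi.  The matrix T and
# the vectors are ad hoc illustration values; T maps R^2 into R^3.

def mat_vec(M, v):
    return [sum(row[j]*v[j] for j in range(len(v))) for row in M]

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

def inner(u, v):
    return sum(x*y for x, y in zip(u, v))

T = [[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]]   # T : R^2 -> R^3, here T* = T^t
TtT = mat_mul(transpose(T), T)               # the relation T*T of the text

phi, psi = [1.0, 1.0], [2.0, -1.0]
t_val = inner(mat_vec(T, phi), mat_vec(T, psi))      # t[phi, psi]
rep_val = inner(mat_vec(TtT, phi), psi)              # (T*T phi, psi)
assert abs(t_val - rep_val) < 1e-12
```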

The so-called second representation theorem may be seen as a corollary of Theorem 5.1.18 and Proposition 5.1.22.

**Theorem 5.1.23** (Second representation theorem)**.** Assume that the closed semibounded form t and the semibounded self-adjoint relation H are connected as in Theorem 5.1.18, so that m(H) = m(t) = γ, and let x ≤ γ. Then

$$\text{dom}\,\mathbf{t} = \text{dom}\,(H - x)^{\frac{1}{2}}$$

and the form t is represented by

$$\mathfrak{t}[\varphi,\psi] = \left( (H\_{\mathrm{op}} - x)^{\frac{1}{2}} \varphi, (H\_{\mathrm{op}} - x)^{\frac{1}{2}} \psi \right) + x(\varphi, \psi), \quad \varphi, \psi \in \mathrm{dom}\, \mathfrak{t}.$$

Moreover, a subset of dom t = dom $(H - x)^{\frac{1}{2}}$ is a core of the form t if and only if it is a core of the operator $(H\_{\text{op}} - x)^{\frac{1}{2}}$.

Proof. For x ≤ γ define the form $\mathfrak{s}\_x$ by

$$\mathfrak{s}\_x[\varphi,\psi] = \left( (H\_{\mathrm{op}} - x)^{\frac{1}{2}} \varphi, (H\_{\mathrm{op}} - x)^{\frac{1}{2}} \psi \right), \quad \varphi, \psi \in \mathrm{dom}\,\mathfrak{s}\_x,$$

on the domain $\text{dom}\,\mathfrak{s}\_x = \text{dom}\,(H\_{\text{op}} - x)^{1/2} = \text{dom}\,(H - x)^{1/2}$. By Proposition 5.1.22, the form $\mathfrak{s}\_x$ is closed and nonnegative. The corresponding nonnegative self-adjoint relation is given by

$$\left((H-x)^{\frac{1}{2}}\right)^{\*}(H-x)^{\frac{1}{2}}=H-x,$$

and hence $\mathfrak{s}\_x[\varphi, \psi] = (\varphi', \psi)$ holds for all {ϕ, ϕ′} ∈ H − x and ψ ∈ dom $\mathfrak{s}\_x$. It follows as in the proof of Theorem 5.1.18 (see (5.1.34)) that the closed semibounded form

$$(\mathfrak{s}\_x + x)[\varphi, \psi] = \left( (H\_{\mathrm{op}} - x)^{\frac{1}{2}} \varphi, (H\_{\mathrm{op}} - x)^{\frac{1}{2}} \psi \right) + x(\varphi, \psi), \quad \varphi, \psi \in \text{dom}\, \mathfrak{s}\_x,$$

is represented by the semibounded self-adjoint relation H. Furthermore,

$$(\mathfrak{s}\_x + x)[\varphi, \psi] = ((H\_{\text{op}} - x)\varphi, \psi) + x(\varphi, \psi) = (H\_{\text{op}}\varphi, \psi)$$

for all ϕ, ψ ∈ dom H, and hence the restrictions of the form $\mathfrak{s}\_x + x$ and of the form t to dom $H\_{\text{op}}$ coincide; cf. Theorem 5.1.18 (iv). According to Proposition 5.1.22 and Lemma 1.5.10, dom $H\_{\text{op}}$ is a core of $\mathfrak{s}\_x$ and hence also of $\mathfrak{s}\_x + x$. On the other hand, by Theorem 5.1.18 (ii), dom $H\_{\text{op}}$ = dom H is also a core of t. Hence, the forms $\mathfrak{s}\_x + x$ and t coincide on the common core dom $H\_{\text{op}}$. This implies that the forms $\mathfrak{s}\_x + x$ and t coincide. Therefore,

$$\text{dom}\,\mathfrak{t} = \text{dom}\,(\mathfrak{s}\_x + x) = \text{dom}\,(H - x)^{\frac{1}{2}}, \qquad x \le \gamma,$$

and

$$\mathfrak{t}[\varphi,\psi] = \left( (H\_{\mathrm{op}} - x)^{\frac{1}{2}} \varphi, (H\_{\mathrm{op}} - x)^{\frac{1}{2}} \psi \right) + x(\varphi, \psi), \quad \varphi, \psi \in \mathrm{dom}\, \mathfrak{t}.$$

Finally, Proposition 5.1.22 shows that a subset of dom t is a core of t if and only if it is a core of the operator $(H\_{\text{op}} - x)^{\frac{1}{2}}$. This completes the proof. -
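When H is a diagonal matrix the square root (H − x)^{1/2} acts entrywise, and the identity of the second representation theorem can be verified directly. A Python sketch with ad hoc illustration values:

```python
# Diagonal sketch of Theorem 5.1.23: for a diagonal matrix H with lower
# bound gamma = m(H) and any x <= gamma, the form t[phi, psi] = (H phi, psi)
# equals ((H - x)^{1/2} phi, (H - x)^{1/2} psi) + x (phi, psi).
# The diagonal entries and the vectors are ad hoc illustration values.
import math

H_diag = [2.0, 5.0, 9.0]               # spectrum of H, so gamma = m(H) = 2
x = 1.0                                # any x <= gamma works
phi = [1.0, -1.0, 2.0]
psi = [0.5, 2.0, -1.0]

t_form = sum(h*p*q for h, p, q in zip(H_diag, phi, psi))   # (H phi, psi)
sqrt_part = sum(math.sqrt(h - x)*p * math.sqrt(h - x)*q    # entrywise root
                for h, p, q in zip(H_diag, phi, psi))
second_rep = sqrt_part + x*sum(p*q for p, q in zip(phi, psi))
assert abs(t_form - second_rep) < 1e-12
```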

## **5.2 Ordering and monotonicity**

In this section an ordering will be introduced for semibounded closed forms t1 and t2, and for semibounded self-adjoint relations H1 and H2 in a Hilbert space H. It will be shown that these orderings are compatible if t1 and H1, and t2 and H2, are related via the first representation theorem (Theorem 5.1.18). An alternative formulation of the ordering of semibounded self-adjoint relations will be given in terms of their resolvent operators. The last part of the section is devoted to a general monotonicity principle in the context of semibounded self-adjoint relations or, equivalently, of closed semibounded forms.

First an ordering will be defined for semibounded forms that are not necessarily closed.

**Definition 5.2.1.** Let t1 and t2 be semibounded forms in H that are not necessarily closed. Then one writes t1 ≤ t2 if

$$\text{dom}\,\mathfrak{t}\_2 \subset \text{dom}\,\mathfrak{t}\_1, \quad \mathfrak{t}\_1[\varphi] \le \mathfrak{t}\_2[\varphi], \quad \varphi \in \text{dom}\,\mathfrak{t}\_2. \tag{5.2.1}$$

Note that if t1 ≤ t2, then t2-convergence implies t1-convergence. Indeed, let $\varphi\_n \to\_{\mathfrak{t}\_2} \varphi$. By Definition 5.1.4, this means that

$\varphi\_n \in \text{dom}\,\mathfrak{t}\_2$, $\varphi\_n \to \varphi$, and $\mathfrak{t}\_2[\varphi\_n - \varphi\_m] \to 0$.

Since t1 ≤ t2, this implies that

$\varphi\_n \in \text{dom}\,\mathfrak{t}\_1$ and $\mathfrak{t}\_1[\varphi\_n - \varphi\_m] \to 0$,

which shows that $\varphi\_n \to\_{\mathfrak{t}\_1} \varphi$. Definition 5.2.1 generates a number of simple but useful observations.

**Lemma 5.2.2.** Let t1, t2, and t3 be semibounded forms in H that are not necessarily closed. Then the following statements hold:

(i) if t2 ⊂ t1, then t1 ≤ t2;

(ii) if t1 ≤ t2, then m(t1) ≤ m(t2);

(iii) if t1 ≤ t2 and t2 ≤ t3, then t1 ≤ t3;

(iv) if t1 ≤ t2 and t2 ≤ t1, then t1 = t2;

(v) if t1 and t2 are closable and t1 ≤ t2, then $\overline{\mathfrak{t}}\_1 \le \overline{\mathfrak{t}}\_2$.


Proof. (i) This follows from the definition of t<sup>2</sup> ⊂ t1; cf. (5.1.2).

(ii) It follows from (5.2.1) that

$$\begin{aligned} \inf \left\{ \frac{\mathfrak{t}\_1[\varphi]}{||\varphi||^2} : \varphi \in \text{dom}\, \mathfrak{t}\_1, \,\varphi \neq 0 \right\} &\leq \inf \left\{ \frac{\mathfrak{t}\_1[\varphi]}{||\varphi||^2} : \varphi \in \text{dom}\, \mathfrak{t}\_2, \,\varphi \neq 0 \right\} \\ &\leq \inf \left\{ \frac{\mathfrak{t}\_2[\varphi]}{||\varphi||^2} : \varphi \in \text{dom}\, \mathfrak{t}\_2, \,\varphi \neq 0 \right\}. \end{aligned}$$

Hence, Definition 5.1.2 implies that m(t1) ≤ m(t2).

(iii) This is an immediate consequence of Definition 5.2.1.

(iv) If t1 ≤ t2 and t2 ≤ t1, then it follows from (5.2.1) that dom t1 = dom t2 and that t1[ϕ] = t2[ϕ] for all ϕ ∈ dom t1 = dom t2. The conclusion now follows by polarization; cf. (5.1.1).

(v) Assume that t1 and t2 are closable forms. Let $\varphi \in \text{dom}\,\overline{\mathfrak{t}}\_2$; then, by Definition 5.1.10, there exists a sequence $(\varphi\_n)$ in dom t2 such that $\varphi\_n \to\_{\mathfrak{t}\_2} \varphi$. Recall that t2-convergence implies t1-convergence and thus $\varphi \in \text{dom}\,\overline{\mathfrak{t}}\_1$. This shows $\text{dom}\,\overline{\mathfrak{t}}\_2 \subset \text{dom}\,\overline{\mathfrak{t}}\_1$. Therefore, Theorem 5.1.12 implies that for $\varphi \in \text{dom}\,\overline{\mathfrak{t}}\_2$ one has

$$\overline{\mathfrak{t}}\_1[\varphi] = \lim\_{n \to \infty} \mathfrak{t}\_1[\varphi\_n] \le \lim\_{n \to \infty} \mathfrak{t}\_2[\varphi\_n] = \overline{\mathfrak{t}}\_2[\varphi],$$

which shows (v). -

Next an ordering will be defined for semibounded self-adjoint relations. It will be shown in Proposition 5.2.6 below that this ordering is in agreement with the notation γ ≤ H for a semibounded self-adjoint relation H with lower bound γ; cf. Definition 1.4.5. Note that the following definition relies on Lemma 1.5.10.

**Definition 5.2.3.** Let H1 and H2 be semibounded self-adjoint relations in H, with lower bounds m(H1) and m(H2), respectively. Then the relations H1 and H2 are said to be ordered, and one writes H1 ≤ H2, if

$$\begin{aligned} \text{dom}\,(H\_2 - x)^{\frac{1}{2}} &\subset \text{dom}\,(H\_1 - x)^{\frac{1}{2}},\\ \|(H\_{1, \text{op}} - x)^{\frac{1}{2}}\varphi\| &\le \|(H\_{2, \text{op}} - x)^{\frac{1}{2}}\varphi\|, \quad \varphi \in \text{dom}\,(H\_2 - x)^{\frac{1}{2}},\end{aligned} \tag{5.2.2}$$

is satisfied for some, and hence for all x ≤ min {m(H1), m(H2)}.

In the next theorem it is shown that the ordering for semibounded forms in Definition 5.2.1 and the ordering for semibounded self-adjoint relations in Definition 5.2.3 are compatible. Here the second representation theorem (Theorem 5.1.23) plays an essential role.

**Theorem 5.2.4.** Let t1 and t2 be closed semibounded forms in H and let H1 and H2 be the corresponding semibounded self-adjoint relations. Then

$$\mathfrak{t}\_1 \le \mathfrak{t}\_2 \quad \Leftrightarrow \quad H\_1 \le H\_2.$$

Proof. Assume first that t1 ≤ t2. Then, by Definition 5.2.1,

$$\operatorname{dom} \mathbf{t}\_2 \subset \operatorname{dom} \mathbf{t}\_1, \quad \mathbf{t}\_1[\varphi] \le \mathbf{t}\_2[\varphi], \quad \varphi \in \operatorname{dom} \mathbf{t}\_2,$$

and for all x ≤ min {m(t1), m(t2)} it follows from Theorem 5.1.23 that (5.2.2) holds. Hence, H1 ≤ H2 by Definition 5.2.3.

Conversely, assume that H1 ≤ H2. Then, by Definition 5.2.3, (5.2.2) holds for all x ≤ min {m(H1), m(H2)}, and hence Theorem 5.1.23 implies t1 ≤ t2. -
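For diagonal matrices both orderings can be compared directly: t1 ≤ t2 is the entrywise ordering of the diagonals, and (5.2.2) reduces to an inequality between norms of square roots. A Python sketch with ad hoc illustration values:

```python
# Diagonal sketch of Theorem 5.2.4 / Definition 5.2.3: H1 <= H2 entrywise
# gives the form ordering t1 <= t2 and the square-root inequality (5.2.2)
# for every x below both lower bounds.  All values are ad hoc illustrations.
import math

H1 = [1.0, 2.0, 4.0]
H2 = [1.5, 3.0, 7.0]                   # H1 <= H2 entrywise
x = 0.0                                # x <= min(m(H1), m(H2)) = 1
phi = [2.0, -1.0, 0.5]

t1 = sum(h*p*p for h, p in zip(H1, phi))
t2 = sum(h*p*p for h, p in zip(H2, phi))
assert t1 <= t2                        # the form ordering t1 <= t2

# ||(H1 - x)^{1/2} phi|| <= ||(H2 - x)^{1/2} phi||, i.e. (5.2.2)
lhs = math.sqrt(sum((h - x)*p*p for h, p in zip(H1, phi)))
rhs = math.sqrt(sum((h - x)*p*p for h, p in zip(H2, phi)))
assert lhs <= rhs
```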

**Lemma 5.2.5.** Let H1, H2, and H3 be semibounded self-adjoint relations in H. Then the following statements hold:

(i) if H1 ≤ H2, then mul H1 ⊂ mul H2;

(ii) if H1 ≤ H2, then m(H1) ≤ m(H2);

(iii) if H1 ≤ H2 and H2 ≤ H3, then H1 ≤ H3;

(iv) if H1 ≤ H2 and H2 ≤ H1, then H1 = H2;

(v) H1 ≤ H2 if and only if t1 ≤ t2 for the corresponding closed semibounded forms t1 and t2.


Proof. Let $\mathfrak{t}\_i$ be the closed semibounded form corresponding to $H\_i$, i = 1, 2, 3. For the proof of (i) it is sufficient to observe that

$$
\overline{\operatorname{dom} H\_2} = \overline{\operatorname{dom} \mathfrak{t}\_2} \subset \overline{\operatorname{dom} \mathfrak{t}\_1} = \overline{\operatorname{dom} H\_1},
$$

where Theorem 5.2.4 and Theorem 5.1.18 (iv) were used. Taking orthogonal complements then gives mul H<sup>1</sup> ⊂ mul H2. For (ii) recall that

$$m(H\_1) = m(\mathfrak{t}\_1) \le m(\mathfrak{t}\_2) = m(H\_2),$$

as follows from Lemma 5.2.2 and Theorem 5.1.18. Statements (iii) and (iv) are translations of similar statements in Lemma 5.2.2. The statement (v) is clear from Theorem 5.2.4. -

Assume that in Definition 5.2.3 the self-adjoint relation H1 has a closed domain dom H1. Then the operator part $H\_{1,\text{op}}$ of H1 is a bounded operator, which implies that dom H1 = dom $(H\_1 - x)^{\frac{1}{2}}$. Thus, in this case H1 ≤ H2 if and only if

$$\begin{aligned} \text{dom}\,(H\_2 - x)^{\frac{1}{2}} &\subset \text{dom}\,H\_1, \\ \left((H\_{1,\text{op}} - x)\varphi, \varphi\right) &\le \|(H\_{2,\text{op}} - x)^{\frac{1}{2}}\varphi\|^2, \quad \varphi \in \text{dom}\,(H\_2 - x)^{\frac{1}{2}}.\end{aligned} \tag{5.2.3}$$

The following proposition gives an alternative version of this statement.

**Proposition 5.2.6.** Let H1 and H2 be semibounded self-adjoint relations in H and assume that dom H1 is closed. Then the following statements are equivalent:

(i) H1 ≤ H2;

(ii) dom H2 ⊂ dom H1 and $(H\_{1,\text{op}}\,\varphi, \varphi) \le (H\_{2,\text{op}}\,\varphi, \varphi)$ for all ϕ ∈ dom H2.


Moreover, if H1 ∈ **B**(H), then these statements are equivalent to

(iii) (H1ϕ, ϕ) ≤ (H2,op ϕ, ϕ), ϕ ∈ dom H2;

(iv) (H1ϕ, ϕ) ≤ (ϕ′, ϕ), {ϕ, ϕ′} ∈ H2;

and in the particular case that $H\_1 = \gamma\_1 I\_{\mathfrak{H}}$ to

(v) γ1(ϕ, ϕ) ≤ (H2,op ϕ, ϕ), ϕ ∈ dom H2;

(vi) γ1(ϕ, ϕ) ≤ (ϕ′, ϕ), {ϕ, ϕ′} ∈ H2.


Proof. (i) ⇒ (ii) Let (i) be satisfied. Then dom H2 ⊂ dom $(H\_2 - x)^{\frac{1}{2}}$ ⊂ dom H1 by (5.2.3), and for all ϕ ∈ dom H2 the inequality in (5.2.3) takes the form

$$((H\_{1, \text{op}} - x)\varphi, \varphi) \le ((H\_{2, \text{op}} - x)\varphi, \varphi),$$

which implies (ii).

(ii) ⇒ (i) Let (ii) be satisfied and let ϕ ∈ dom $(H\_2 - x)^{\frac{1}{2}}$. Then there exists a sequence $(\varphi\_n)$ in dom H2 such that

$$
\varphi\_n \to \varphi \quad \text{and} \quad (H\_{2, \text{op}} - x)^{\frac{1}{2}} \varphi\_n \to (H\_{2, \text{op}} - x)^{\frac{1}{2}} \varphi, \quad n \to \infty,
$$

since dom H2 is a core of $(H\_2 - x)^{\frac{1}{2}}$; see Lemma 1.5.10. Due to the assumption one has $\varphi\_n \in \text{dom}\,H\_1$ and

$$\left( (H\_{1, \text{op}} - x)\varphi\_n, \varphi\_n \right) \le \left( (H\_{2, \text{op}} - x)\varphi\_n, \varphi\_n \right) = \|(H\_{2, \text{op}} - x)^{\frac{1}{2}}\varphi\_n\|^2.$$

Since dom H<sup>1</sup> is closed it follows by taking the limit that

$$\left( (H\_{1, \text{op}} - x)\varphi, \varphi \right) \le \|(H\_{2, \text{op}} - x)^{\frac{1}{2}}\varphi\|^2, \quad \varphi \in \text{dom}\left( H\_2 - x \right)^{\frac{1}{2}}.$$

Hence, (5.2.3) is satisfied or, equivalently, H<sup>1</sup> ≤ H2.

If H1 ∈ **B**(H), then dom H1 = H and hence the remaining statements are clear. -

In particular, the inequality in (v)–(vi) of Proposition 5.2.6 shows that the ordering $\gamma I\_{\mathfrak{H}} \le H$ is equivalent to H being semibounded with lower bound γ as defined in Definition 1.4.5. Furthermore, if both H1 and H2 are self-adjoint operators in **B**(H), then they are semibounded and Proposition 5.2.6 (iii) shows that H1 ≤ H2 in the sense of Definition 5.2.3 agrees with the usual definition (H1ϕ, ϕ) ≤ (H2ϕ, ϕ) for all ϕ ∈ H.

The ordering for semibounded relations H<sup>1</sup> and H<sup>2</sup> can also be expressed in terms of their resolvent operators. The next proposition is an immediate consequence of Proposition 1.5.11 (for the special case ρ = 1).

**Proposition 5.2.7.** Let H1 and H2 be semibounded self-adjoint relations in H. Then the following statements are equivalent:

(i) H1 ≤ H2;

(ii) for some, and hence for all, x < min {m(H1), m(H2)},


$$\left(H\_2 - x\right)^{-1} \le \left(H\_1 - x\right)^{-1}.$$

The next corollary slightly extends Proposition 5.2.7 and gives a further interpretation of the inequality H<sup>1</sup> ≤ H<sup>2</sup> when x ≤ min {m(H1), m(H2)}. The equivalence in (5.2.4) below is an example of the antitonicity property.

**Corollary 5.2.8.** Let H<sup>1</sup> and H<sup>2</sup> be semibounded self-adjoint relations in H. Then

$$H\_1 \le H\_2$$

if and only if for γ ≤ min {m(H1), m(H2)} one has

$$(H\_2 - \gamma)^{-1} \le (H\_1 - \gamma)^{-1}.$$

In particular, if H<sup>1</sup> and H<sup>2</sup> are nonnegative self-adjoint relations, then

$$H\_1 \le H\_2 \quad \Leftrightarrow \quad H\_2^{-1} \le H\_1^{-1}. \tag{5.2.4}$$

Proof. Let H be a semibounded self-adjoint relation with γ ≤ m(H). Then H − γ is nonnegative, and hence also $(H - \gamma)^{-1}$ is a nonnegative self-adjoint relation. Now write for x < γ

$$H - x = H - \gamma - (x - \gamma),$$

and apply Corollary 1.1.12 (with H replaced by $(H - \gamma)^{-1}$ and λ replaced by $(x - \gamma)^{-1}$), obtaining

$$\left(H-x\right)^{-1} = -\frac{1}{x-\gamma} - \frac{1}{(x-\gamma)^2} \left( (H-\gamma)^{-1} - \frac{1}{x-\gamma} \right)^{-1}.$$

Hence, for the pair of semibounded self-adjoint relations H<sup>1</sup> and H<sup>2</sup> and with γ ≤ min {m(H1), m(H2)} one obtains for each x<γ:

$$\begin{aligned} &(H\_1 - x)^{-1} - (H\_2 - x)^{-1} \\ &= \frac{1}{(x - \gamma)^2} \left[ \left( (H\_2 - \gamma)^{-1} - \frac{1}{x - \gamma} \right)^{-1} - \left( (H\_1 - \gamma)^{-1} - \frac{1}{x - \gamma} \right)^{-1} \right]. \end{aligned}$$

Since x − γ < 0, a repeated application of Proposition 5.2.7 shows the equivalence. In fact, H1 ≤ H2 if and only if $(H\_2 - x)^{-1} \le (H\_1 - x)^{-1}$ by Proposition 5.2.7, which by the above formula is equivalent to

$$\left( (H\_1 - \gamma)^{-1} - \frac{1}{x - \gamma} \right)^{-1} \le \left( (H\_2 - \gamma)^{-1} - \frac{1}{x - \gamma} \right)^{-1}.\tag{5.2.5}$$

Another application of Proposition 5.2.7 shows that the inequality (5.2.5) is equivalent to the inequality $(H\_2 - \gamma)^{-1} \le (H\_1 - \gamma)^{-1}$. -
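The antitonicity (5.2.4) can be illustrated for positive definite matrices, where it is not restricted to the commuting case. The following Python sketch checks H1 ≤ H2 and H2⁻¹ ≤ H1⁻¹ for an ad hoc 2 × 2 pair by verifying that the relevant differences have nonnegative eigenvalues:

```python
# Sketch of the antitonicity (5.2.4): for positive definite matrices with
# H1 <= H2 one has H2^{-1} <= H1^{-1}.  A 2x2 example with hand-rolled
# linear algebra; the matrices are ad hoc illustration values.
import math

def eigvals2(M):                       # eigenvalues of a symmetric 2x2 matrix
    tr = M[0][0] + M[1][1]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    r = math.sqrt(max(tr*tr/4.0 - det, 0.0))
    return tr/2.0 - r, tr/2.0 + r

def inv2(M):
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/d, -M[0][1]/d], [-M[1][0]/d, M[0][0]/d]]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def nonneg(M):                         # M >= 0 iff both eigenvalues are >= 0
    lo, hi = eigvals2(M)
    return lo >= -1e-12

H1 = [[2.0, 0.0], [0.0, 2.0]]
H2 = [[3.0, 1.0], [1.0, 3.0]]
assert nonneg(sub(H2, H1))             # H1 <= H2
assert nonneg(sub(inv2(H1), inv2(H2))) # hence H2^{-1} <= H1^{-1}
```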

As a corollary to Proposition 5.2.7 it will be shown that in the case H1 ≤ H2 the difference $(H\_1 - x)^{-1} - (H\_2 - x)^{-1}$, x < min {m(H1), m(H2)}, can be used to describe the gap between the corresponding form domains

$$\operatorname{dom}\left(H\_2 - x\right)^{\frac{1}{2}} \subset \operatorname{dom}\left(H\_1 - x\right)^{\frac{1}{2}}.$$

**Corollary 5.2.9.** Let H<sup>1</sup> and H<sup>2</sup> be semibounded self-adjoint relations in H and assume that

$$H\_1 \le H\_2.$$

Then for all x < min {m(H1), m(H2)} the operator $(H\_1 - x)^{-1} - (H\_2 - x)^{-1} \in \mathbf{B}(\mathfrak{H})$ is nonnegative and

$$\operatorname{dom}\left(H\_1 - x\right)^{\frac{1}{2}} = \operatorname{ran}\left(\left(H\_1 - x\right)^{-1} - \left(H\_2 - x\right)^{-1}\right)^{\frac{1}{2}} + \operatorname{dom}\left(H\_2 - x\right)^{\frac{1}{2}}.$$

Proof. Since H<sup>1</sup> ≤ H2, the operator R(x) ∈ **B**(H), defined by

$$R(x) = \left(H\_1 - x\right)^{-1} - \left(H\_2 - x\right)^{-1},$$

is nonnegative for x < min {m(H1), m(H2)}; cf. Proposition 5.2.7. Hence, one can write

$$\begin{aligned} (H\_1 - x)^{-1} &= R(x) + (H\_2 - x)^{-1} \\ &= \begin{pmatrix} R(x)^{\frac{1}{2}} & (H\_2 - x)^{-\frac{1}{2}} \end{pmatrix} \begin{pmatrix} R(x)^{\frac{1}{2}} \\ (H\_2 - x)^{-\frac{1}{2}} \end{pmatrix}. \end{aligned} \tag{5.2.6}$$

Now recall that if $T = (A \;\; B)$ is a row operator with A, B ∈ **B**(H), then it follows from $\text{ran}\,(TT^{\*})^{\frac{1}{2}} = \text{ran}\,|T^{\*}| = \text{ran}\,T$, cf. Corollary D.6, that

$$\text{ran}\,(AA^\* + BB^\*)^{\frac{1}{2}} = \text{ran}\,(A\,\,B) = \text{ran}\,A + \text{ran}\,B.\tag{5.2.7}$$

Hence, taking square roots in the identity (5.2.6) and applying (5.2.7) shows that

$$\text{ran}\,(H\_1 - x)^{-\frac{1}{2}} = \text{ran}\,R(x)^{\frac{1}{2}} + \text{ran}\,(H\_2 - x)^{-\frac{1}{2}},$$

which yields the desired decomposition

$$\operatorname{dom}\left(H\_1 - x\right)^{\frac{1}{2}} = \operatorname{ran} R(x)^{\frac{1}{2}} + \operatorname{dom}\left(H\_2 - x\right)^{\frac{1}{2}}$$

for x < min {m(H1), m(H2)}. -

Now the ordering for semibounded self-adjoint relations and for semibounded closed forms will be used to reinterpret and extend the monotonicity result in Proposition 1.9.9.

For the proof of the following theorem it is useful to have available an auxiliary result concerning the interchange of limits. Let (fn) be a nondecreasing sequence of real nondecreasing functions defined on an open interval (a, b). Thus, for all x ∈ (a, b) one has

$$f\_m(x) \le f\_n(x), \quad m \le n,\tag{5.2.8}$$

and for all $n \in \mathbb{N}$

$$f\_n(x) \le f\_n(y), \quad a < x \le y < b. \tag{5.2.9}$$

In view of (5.2.8) the pointwise limit

$$f\_{\infty}(x) = \lim\_{n \to \infty} f\_n(x), \quad x \in (a, b), \tag{5.2.10}$$

gives a function $f\_{\infty} : (a, b) \to \mathbb{R} \cup \{\infty\}$ that is nondecreasing, thanks to (5.2.9). This is clear when all values $f\_{\infty}(x)$ are finite, in which case $\lim\_{x \to b} f\_{\infty}(x)$ is proper or improper. However, if $f\_{\infty}(x\_0) = \infty$ for some $x\_0 \in (a, b)$, then (5.2.9) shows that $f\_{\infty}(x) = \infty$ for all $x\_0 \le x < b$. In this case the function $f\_{\infty}$ is also called nondecreasing (in the sense of $\mathbb{R} \cup \{\infty\}$) and one defines $\lim\_{x \to b} f\_{\infty}(x) = \infty$. In view of (5.2.9) the limit

$$f\_n(b) = \lim\_{x \to b^{-}} f\_n(x), \quad n \in \mathbb{N}, \tag{5.2.11}$$

gives a sequence with values in $\mathbb{R} \cup \{\infty\}$ that is nondecreasing, thanks to (5.2.8). This is again clear when all limits $f\_n(b)$ are finite, in which case $\lim\_{n \to \infty} f\_n(b)$ is proper or improper. However, if there exists some $m \in \mathbb{N}$ for which $f\_m(b) = \infty$, then for all n ≥ m one has $f\_n(b) = \infty$. In this case one defines $\lim\_{n \to \infty} f\_n(b) = \infty$.
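As a simple illustration of the infinite case (an ad hoc example, not taken from the text), consider on $(a, b) = (0, 1)$ the truncations

$$f\_n(x) = \min\left\{ n, \frac{1}{1-x} \right\}, \quad x \in (0,1), \quad n \in \mathbb{N}.$$

Each $f\_n$ is nondecreasing and $f\_n \le f\_{n+1}$, while $f\_{\infty}(x) = (1-x)^{-1}$ for all $x \in (0,1)$. Here $\lim\_{x \to 1} f\_{\infty}(x) = \infty$ and $f\_n(1) = n$, so that $\lim\_{n \to \infty} f\_n(1) = \infty$: both limits in (5.2.12) below are infinite simultaneously.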

**Lemma 5.2.10.** *Let* (fn) *be a nondecreasing sequence of nondecreasing functions defined on some open interval* (a, b)*. Let* f<sup>∞</sup> *be the nondecreasing limit function in* (5.2.10) *and let* (fn(b)) *be the nondecreasing sequence of limits in* (5.2.11)*. Then*

$$\lim\_{x \to b} f\_{\infty}(x) = \lim\_{n \to \infty} f\_n(b). \tag{5.2.12}$$

*In particular, both limits in* (5.2.12) *are finite or infinite simultaneously.*

*Proof.* Consider the case that all values of $f\_{\infty}$ are real. Since $f\_n(x) \le f\_{\infty}(x)$ for all $x \in (a, b)$, it follows that for any $n \in \mathbb{N}$

$$f\_n(b) = \lim\_{x \to b} f\_n(x) \le \lim\_{x \to b} f\_{\infty}(x).$$

This implies

$$\lim\_{n \to \infty} f\_n(b) \le \lim\_{x \to b} f\_{\infty}(x),\tag{5.2.13}$$

where the limits may be infinite. Assume that there is strict inequality in (5.2.13). First consider the case $\lim\_{x \to b} f\_{\infty}(x) < \infty$. Then clearly there exists some δ > 0 for which

$$\delta + \lim\_{n \to \infty} f\_n(b) < \lim\_{x \to b} f\_{\infty}(x). \tag{5.2.14}$$

Next consider the case $\lim\_{x \to b} f\_{\infty}(x) = \infty$. Then $\lim\_{n \to \infty} f\_n(b) < \infty$ (otherwise there would be equality in (5.2.13)) and (5.2.14) holds for any δ > 0. In each case, there exists some x ∈ (a, b) such that

$$
\delta + \lim\_{n \to \infty} f\_n(b) < f\_{\infty}(x).
$$

From this one concludes

$$
\delta + f\_{\infty}(x) = \delta + \lim\_{n \to \infty} f\_n(x) \le \delta + \lim\_{n \to \infty} f\_n(b) < f\_{\infty}(x);
$$

a contradiction. Hence, there is equality in (5.2.13). It remains to consider the situation where $f_\infty(x_0) = \infty$ for some $a < x_0 < b$. In this case $f_\infty(x) = \infty$ for all $x_0 < x < b$ and $\lim_{x \to b} f_\infty(x) = \infty$. Assume that

$$L = \lim\_{n \to \infty} f\_n(b) < \infty.$$

For any $x_0 \le x < b$ one has

$$f\_n(x) \le f\_n(b) \le L,$$

which implies that $\lim_{n \to \infty} f_n(x) \le L$; a contradiction. Again, there is equality in (5.2.13). $\square$

**Theorem 5.2.11** (Monotonicity principle)**.** Let $(H_n)$ be a nondecreasing sequence of semibounded self-adjoint relations in $\mathfrak{H}$ and let $\gamma \le m(H_1)$. Then there exists a semibounded self-adjoint relation $H_\infty$ with $\gamma \le m(H_\infty)$ and $H_n \le H_\infty$ such that $H_n \to H_\infty$ in the strong resolvent sense, i.e.,

$$(H\_n - \lambda)^{-1}\varphi \to \left(H\_\infty - \lambda\right)^{-1}\varphi, \quad \varphi \in \mathfrak{H}, \quad \lambda \in \mathbb{C} \backslash [\gamma, \infty). \tag{5.2.15}$$

Furthermore, $H_\infty$ satisfies

$$\begin{aligned} &\text{dom}\,(H\_{\infty}-\gamma)^{\frac{1}{2}}\\ &= \left\{ \varphi \in \bigcap\_{n=1}^{\infty} \text{dom}\,(H\_n-\gamma)^{\frac{1}{2}} : \lim\_{n \to \infty} \|(H\_{n,\text{op}}-\gamma)^{\frac{1}{2}}\varphi\| < \infty \right\} \end{aligned} \tag{5.2.16}$$

and for all $\varphi \in \operatorname{dom}(H_\infty - \gamma)^{\frac{1}{2}}$ it holds that

$$\|(H\_{\infty, \text{op}} - \gamma)^{\frac{1}{2}} \varphi\| = \lim\_{n \to \infty} \|(H\_{n, \text{op}} - \gamma)^{\frac{1}{2}} \varphi\|. \tag{5.2.17}$$

Proof. The assumption $H_n \le H_m$ for $n \le m$ and Proposition 5.2.7 lead to

$$0 \le (H\_m - x)^{-1} \le (H\_n - x)^{-1}, \quad x < \gamma,$$

where $\gamma \le m(H_1)$. Hence, by Proposition 1.9.14, there exists a semibounded self-adjoint relation $H_\infty$ with $\gamma \le m(H_\infty)$ such that

$$0 \le (H\_{\infty} - x)^{-1} \le (H\_n - x)^{-1}, \quad x < \gamma,\tag{5.2.18}$$

and $H_n$ converges to $H_\infty$ in the strong resolvent sense on $\mathbb{C} \setminus [\gamma, \infty)$, that is, (5.2.15) holds.

It remains to prove (5.2.16) and (5.2.17). It follows from Corollary 1.1.12, with $H$ replaced by $H_n - \gamma$ and $H_\infty - \gamma$, respectively, that for $x < 0$ one has

$$\begin{aligned} &\left(\left\{(H\_n-\gamma)^{-1}-x\right\}^{-1}\varphi,\varphi\right)-\left(\left\{(H\_\infty-\gamma)^{-1}-x\right\}^{-1}\varphi,\varphi\right) \\ &=\frac{1}{x^2}\left[\left(\left((H\_\infty-\gamma)-\frac{1}{x}\right)^{-1}\varphi,\varphi\right)-\left(\left((H\_n-\gamma)-\frac{1}{x}\right)^{-1}\varphi,\varphi\right)\right].\end{aligned}$$

Since $\gamma + 1/x < \gamma$, the right-hand side tends to zero monotonically from below as $n \to \infty$, as follows from (5.2.15) and (5.2.18); but then the left-hand side also tends to zero monotonically from below.

To complete the proof, consider the functions defined for $\varphi \in \mathfrak{H}$ and $x < 0$ by

$$f\_n(x) = \left(\left((H\_n - \gamma)^{-1} - x\right)^{-1}\varphi, \varphi\right)$$

and

$$f\_{\infty}(x) = \left(\left((H\_{\infty} - \gamma)^{-1} - x\right)^{-1}\varphi, \varphi\right).$$

The above argument shows that the sequence $(f_n)$ is nondecreasing with $f_\infty$ as pointwise limit. It follows from Lemma 1.5.12 (with $H$ replaced by $H_n - \gamma$ and $H_\infty - \gamma$, respectively) that the functions $f_n$ and $f_\infty$ are nondecreasing on the interval $(-\infty, 0)$ and that

$$\begin{split} f\_n(0) &= \lim\_{x \uparrow 0} \left( ((H\_n - \gamma)^{-1} - x)^{-1} \varphi, \varphi \right) \\ &= \begin{cases} \| (H\_{n, \text{op}} - \gamma)^{\frac{1}{2}} \varphi \|^{2}, & \varphi \in \text{dom} \left( H\_n - \gamma \right)^{\frac{1}{2}}, \\ \infty, & \text{otherwise}, \end{cases} \end{split} \tag{5.2.19}$$

while

$$\begin{split} f\_{\infty}(0) &= \lim\_{x \uparrow 0} \left( ((H\_{\infty} - \gamma)^{-1} - x)^{-1} \varphi, \varphi \right) \\ &= \begin{cases} \| (H\_{\infty, \text{op}} - \gamma)^{\frac{1}{2}} \varphi \| ^2, & \varphi \in \text{dom} \left( H\_{\infty} - \gamma \right)^{\frac{1}{2}}, \\ \infty, & \text{otherwise}. \end{cases} \end{split} \tag{5.2.20}$$

Hence, by Lemma 5.2.10,

$$\lim\_{n \to \infty} f\_n(0) = f\_{\infty}(0),\tag{5.2.21}$$

where the limits in (5.2.21) are finite or infinite simultaneously.

Assume that $\varphi \in \operatorname{dom}(H_\infty - \gamma)^{\frac{1}{2}}$. Then, by (5.2.20), $f_\infty(0) < \infty$, which, in view of (5.2.21), implies that $f_n(0) < \infty$ for all $n$. Hence, $\varphi \in \bigcap_{n=1}^{\infty} \operatorname{dom}(H_n - \gamma)^{\frac{1}{2}}$ by (5.2.19), and (5.2.21) reads

$$\lim\_{n \to \infty} \|(H\_{n, \text{op}} - \gamma)^{\frac{1}{2}} \varphi\|^2 = \|(H\_{\infty, \text{op}} - \gamma)^{\frac{1}{2}} \varphi\|^2. \tag{5.2.22}$$

Thus, $\varphi$ belongs to the right-hand side of (5.2.16). This shows the inclusion $(\subset)$ in (5.2.16), and (5.2.22) gives (5.2.17).

Conversely, assume that $\varphi$ belongs to the right-hand side of (5.2.16), that is, $\varphi \in \bigcap_{n=1}^{\infty} \operatorname{dom}(H_n - \gamma)^{\frac{1}{2}}$ and

$$\lim\_{n \to \infty} \|(H\_{n, \text{op}} - \gamma)^{\frac{1}{2}} \varphi\| < \infty.$$

By (5.2.19) one sees that $f_n(0) < \infty$ and that $\lim_{n \to \infty} f_n(0) < \infty$. It follows from (5.2.21) that $f_\infty(0) < \infty$. Now apply (5.2.20) to conclude that $\varphi \in \operatorname{dom}(H_\infty - \gamma)^{\frac{1}{2}}$. This shows the inclusion $(\supset)$ in (5.2.16). $\square$
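In finite dimensions the monotonicity principle can be made concrete. The following sketch (our illustration, not from the text) takes $H_n = \operatorname{diag}(1, n)$ on $\mathbb{C}^2$, a nondecreasing sequence with $\gamma = m(H_1) = 1$; the strong resolvent limit is the self-adjoint relation $H_\infty$ with resolvent $(H_\infty - \lambda)^{-1} = \operatorname{diag}((1-\lambda)^{-1}, 0)$:

```python
import numpy as np

# Illustration (ours) of Theorem 5.2.11 on C^2: H_n = diag(1, n) is
# nondecreasing with gamma = m(H_1) = 1.  The limit is the self-adjoint
# *relation* H_inf whose multivalued part is span{e2}; its resolvent at
# lam < 1 is diag(1/(1 - lam), 0).
lam = -1.0
R_inf = np.diag([1.0 / (1.0 - lam), 0.0])

def resolvent(H, lam):
    return np.linalg.inv(H - lam * np.eye(2))

errs = [np.linalg.norm(resolvent(np.diag([1.0, float(n)]), lam) - R_inf)
        for n in (1, 10, 100, 1000)]

# (5.2.15): resolvent convergence (here even in operator norm)
assert all(e1 > e2 for e1, e2 in zip(errs, errs[1:]))
assert errs[-1] < 1e-3

# (5.2.16): ||(H_n - gamma)^{1/2} e1||^2 = 0 stays bounded, so e1 lies in
# dom (H_inf - gamma)^{1/2}; ||(H_n - gamma)^{1/2} e2||^2 = n - 1 -> infinity
# excludes e2 from that domain.
gamma = 1.0
sq = [n - gamma for n in (1, 10, 100, 1000)]
assert sq[-1] > sq[0]
```

Here the error $\|(H_n-\lambda)^{-1} - (H_\infty-\lambda)^{-1}\|$ equals $1/(n+1)$ at $\lambda = -1$, matching (5.2.15).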

**Corollary 5.2.12.** Let $(H_n)$ be a nondecreasing sequence of semibounded self-adjoint relations and let $H_\infty$ be the strong resolvent limit as in Theorem 5.2.11. Then the following statements hold:

(i) If $K$ is a self-adjoint relation in $\mathfrak{H}$ such that $H_n \le K$ for all $n \in \mathbb{N}$, then $H_\infty \le K$.

(ii) If $S$ is a relation in $\mathfrak{H}$ such that $S \subset H_n$ for all $n \in \mathbb{N}$, then $S \subset H_\infty$.

Proof. (i) Assume that $H_n \le K$. Then for all $x < \gamma \le m(H_1)$

$$0 \le ((K - x)^{-1} \varphi, \varphi) \le ((H\_n - x)^{-1} \varphi, \varphi), \quad \varphi \in \mathfrak{H}.$$

By (5.2.15), $(H_n - x)^{-1}\varphi \to (H_\infty - x)^{-1}\varphi$ for $\varphi \in \mathfrak{H}$, and one concludes that

$$0 \le ((K - x)^{-1} \varphi, \varphi) \le ((H\_{\infty} - x)^{-1} \varphi, \varphi), \quad \varphi \in \mathfrak{H}.$$

Hence, by Proposition 5.2.7 it follows that $H_\infty \le K$.

(ii) Assume that $\{\varphi, \varphi'\} \in S$. Then $\{\varphi, \varphi'\} \in H_n$ by assumption and hence for all $n \in \mathbb{N}$ one has

$$(H\_n - \lambda)^{-1}(\varphi' - \lambda \varphi) = \varphi, \quad \lambda \in \mathbb{C} \backslash [\gamma, \infty).$$

By (5.2.15), $(H_n - \lambda)^{-1}\psi \to (H_\infty - \lambda)^{-1}\psi$ for $\psi \in \mathfrak{H}$, and one concludes that

$$(H\_{\infty} - \lambda)^{-1}(\varphi' - \lambda \varphi) = \lim\_{n \to \infty} (H\_n - \lambda)^{-1}(\varphi' - \lambda \varphi) = \varphi,$$

which gives $\{\varphi, \varphi'\} \in H_\infty$. Hence, $S \subset H_\infty$. $\square$

Now consider the special case of a nondecreasing sequence of self-adjoint operators $(H_n)$ in $\mathbf{B}(\mathfrak{H})$. Then it is clear that $H_\infty$ is a semibounded self-adjoint relation, which is an operator in $\mathbf{B}(\mathfrak{H})$ if and only if the sequence $(H_n)$ is uniformly bounded; cf. Corollary 1.9.10 and the beginning of Section 1.9. The following corollary shows that the domain of the square root of $H_\infty - \gamma$, $\gamma \le m(H_1)$, is given by those $\varphi \in \mathfrak{H}$ for which $(H_n \varphi, \varphi)$ has a finite limit as $n \to \infty$.

**Corollary 5.2.13.** Let $(H_n)$ be a nondecreasing sequence of self-adjoint operators in $\mathbf{B}(\mathfrak{H})$ with $\gamma \le m(H_1)$ and define

$$\mathfrak{E} = \left\{ \varphi \in \mathfrak{H} : \lim\_{n \to \infty} (H\_n \varphi, \varphi) < \infty \right\}.$$

Let $H_\infty$ be the semibounded self-adjoint limit of the sequence $(H_n)$. Then

$$\mathfrak{E} = \text{dom}\,(H\_{\infty} - \gamma)^{\frac{1}{2}}.$$

In particular, one has


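A finite-dimensional sketch of Corollary 5.2.13 (our illustration, with hypothetical data): take $H_n = nP$ with $P$ the orthogonal projection onto $\operatorname{span}\{(1,1)\}$ in $\mathbb{R}^2$, so that $\gamma = m(H_1) = 0$ and $\mathfrak{E} = \ker P$:

```python
import numpy as np

# Illustration (ours) of Corollary 5.2.13: H_n = n * P with P the orthogonal
# projection onto span{(1,1)/sqrt(2)}.  The sequence (H_n) is nondecreasing
# with m(H_1) = 0, so gamma = 0 works, and E = ker P.
P = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])

def quad(n, phi):
    # the quadratic form (H_n phi, phi) with H_n = n * P
    return n * float(phi @ P @ phi)

phi_in = np.array([1.0, -1.0])   # lies in ker P
phi_out = np.array([1.0, 0.0])   # does not lie in ker P

# (H_n phi_in, phi_in) = 0 for every n: phi_in belongs to E = dom H_inf^{1/2}
assert all(abs(quad(n, phi_in)) < 1e-12 for n in (1, 10, 100))
# (H_n phi_out, phi_out) = n/2 -> infinity: phi_out is excluded from E
assert quad(100, phi_out) > quad(1, phi_out)
```

The limit $H_\infty$ is the relation that acts as $0$ on $\ker P$ and has multivalued part $\operatorname{ran} P$, consistent with $\mathfrak{E} = \operatorname{dom}(H_\infty - \gamma)^{\frac{1}{2}} = \ker P$.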
A useful variant of Corollary 5.2.13 is concerned with a nondecreasing function $M : (a,b) \to \mathbf{B}(\mathfrak{H})$ whose values are self-adjoint operators; cf. Corollary 2.3.8. Then there exists a self-adjoint limit at the right endpoint $b$, which can be retrieved via sequences converging to $b$. For the existence of the self-adjoint limit at the left endpoint $a$, consider the function $x \mapsto -M(x)$, which is nondecreasing when $x \in (a,b)$ tends to $a$.

**Corollary 5.2.14.** Let $M : (a,b) \to \mathbf{B}(\mathfrak{H})$ be a nondecreasing function whose values are self-adjoint operators. Then there exist self-adjoint relations $M(a)$ and $M(b)$ in $\mathfrak{H}$ such that $M(x) \to M(b)$ in the strong resolvent sense when $x \to b$ and $M(x) \to M(a)$ in the strong resolvent sense when $x \to a$. Furthermore,

$$M(x) \le M(b) \quad \text{and} \quad -M(x) \le -M(a), \qquad x \in (a, b).$$

Define

$$\mathfrak{E}\_b = \left\{ \varphi \in \mathfrak{H} : \lim\_{x \uparrow b} (M(x)\varphi, \varphi) < \infty \right\}$$

and

$$\mathfrak{E}\_a = \left\{ \varphi \in \mathfrak{H} : \lim\_{x \downarrow a} (M(x)\varphi, \varphi) > -\infty \right\}.$$

Then for $c = a$ or $c = b$ one has

(i) $\mathfrak{E}_c = \mathfrak{H} \Leftrightarrow M(c) \in \mathbf{B}(\mathfrak{H})$;


Let $(\mathfrak{t}_n)$ be a nondecreasing sequence of closed semibounded forms in $\mathfrak{H}$ which satisfy $\gamma \le m(\mathfrak{t}_1)$. By Theorem 5.1.18, there exist unique semibounded self-adjoint relations $H_n$, bounded from below by $\gamma$, which correspond to the $\mathfrak{t}_n$. According to Theorem 5.2.4, the sequence $(H_n)$ is nondecreasing. By the monotonicity principle in Theorem 5.2.11, the strong resolvent limit of the sequence $(H_n)$ exists as a semibounded self-adjoint relation $H_\infty$ with lower bound $\gamma$ such that $H_n \le H_\infty$. Let $\mathfrak{t}_\infty$ be the form corresponding to $H_\infty$ by Proposition 5.1.19. Then $\mathfrak{t}_\infty$ is bounded below by $\gamma$ and $\mathfrak{t}_n \le \mathfrak{t}_\infty$ by Theorem 5.2.4. Therefore, the following theorem concerning a nondecreasing sequence of forms may be seen as a direct consequence of Theorem 5.2.11.

**Theorem 5.2.15** (Monotonicity principle)**.** Let $(\mathfrak{t}_n)$ be a nondecreasing sequence of closed semibounded forms in $\mathfrak{H}$ and let $\gamma \le m(\mathfrak{t}_1)$. Then there exists a closed semibounded form $\mathfrak{t}_\infty$ with $\gamma \le m(\mathfrak{t}_\infty)$ such that $\mathfrak{t}_n \le \mathfrak{t}_\infty$ and

$$\text{dom}\,\mathbf{t}\_{\infty} = \left\{ \varphi \in \bigcap\_{n=1}^{\infty} \text{dom}\,\mathbf{t}\_{n} : \lim\_{n \to \infty} \mathbf{t}\_{n}[\varphi] < \infty \right\} \tag{5.2.23}$$

and

$$\mathfrak{t}\_{\infty}[\varphi] = \lim\_{n \to \infty} \mathfrak{t}\_{n}[\varphi], \quad \varphi \in \text{dom } \mathfrak{t}\_{\infty}. \tag{5.2.24}$$

Moreover, the relations $H_n$ corresponding to the forms $\mathfrak{t}_n$ converge in the strong resolvent sense to the relation $H_\infty$ corresponding to the form $\mathfrak{t}_\infty$.

Proof. It is clear from Theorem 5.1.23 and the formulas (5.2.16) and (5.2.17) in Theorem 5.2.11 that the limit form $\mathfrak{t}_\infty$ satisfies (5.2.23) and (5.2.24). $\square$

## **5.3 Friedrichs extensions of semibounded relations**

A semibounded, not necessarily closed, relation $S$ in a Hilbert space $\mathfrak{H}$ has equal defect numbers, and hence admits self-adjoint extensions in $\mathfrak{H}$. It will be shown that such a relation $S$ has a distinguished semibounded self-adjoint extension $S_{\mathrm{F}}$, the so-called Friedrichs extension of $S$, with $m(S_{\mathrm{F}}) = m(S)$. The construction of this extension involves a closed semibounded form associated with $S$. The characteristic properties of this extension will be investigated in detail.

Let $S$ be a semibounded relation in $\mathfrak{H}$. Recall from Lemma 5.1.17 that the form $\mathfrak{t}_S$ given by

$$\mathfrak{t}\_S[f,g] = (f',g), \quad \{f,f'\}, \{g,g'\} \in S,\tag{5.3.1}$$

is well defined and that it is semibounded with lower bound $m(S)$. Moreover, it has been shown that $\mathfrak{t}_S$ is closable and that the closure $\overline{\mathfrak{t}_S}$ of $\mathfrak{t}_S$ is a semibounded closed form whose lower bound is equal to $m(S)$. Also, $\operatorname{dom} \mathfrak{t}_S = \operatorname{dom} S$ is a core of $\overline{\mathfrak{t}_S}$.

**Lemma 5.3.1.** Let $S$ be a semibounded relation in $\mathfrak{H}$ with lower bound $m(S)$. Let $\overline{\mathfrak{t}_S}$ be the closure of the form $\mathfrak{t}_S$ in (5.3.1). Then the unique relation $S_{\mathrm{F}}$ corresponding to $\overline{\mathfrak{t}_S}$ via Theorem 5.1.18 is a semibounded self-adjoint extension of $S$ with lower bound $m(S_{\mathrm{F}}) = m(S)$. In fact, $S_{\mathrm{F}}$ is a self-adjoint extension of the semibounded relation $S \stackrel{\frown}{+} \dot{\mathfrak{N}}_\infty(S^*)$, so that

$$S \subset S \stackrel{\frown}{+} \dot{\mathfrak{N}}\_{\infty}(S^\*) \subset S\_{\mathrm{F}}, \quad \text{where} \quad \dot{\mathfrak{N}}\_{\infty}(S^\*) = \{0\} \times \operatorname{mul} S^\*. \tag{5.3.2}$$

Moreover,

$$S\_{\mathcal{F}} = (\overline{S})\_{\mathcal{F}}.\tag{5.3.3}$$

Proof. By Theorem 5.1.18, the closed form $\overline{\mathfrak{t}_S}$ induces a unique semibounded self-adjoint relation $S_{\mathrm{F}}$ in $\mathfrak{H}$ such that

$$\overline{\mathfrak{t}\_S}[f,g] = (f',g), \quad \{f,f'\} \in S\_{\mathrm{F}}, \quad g \in \operatorname{dom} \overline{\mathfrak{t}\_S}.$$

To show that $S_{\mathrm{F}}$ is an extension of $S$, let $\{f,f'\} \in S$. As $f \in \operatorname{dom} \mathfrak{t}_S$, it follows that for all $g \in \operatorname{dom} \mathfrak{t}_S$

$$\overline{\mathfrak{t}\_S}[f,g] = \mathfrak{t}\_S[f,g] = (f',g).$$

Since $\operatorname{dom} \mathfrak{t}_S = \operatorname{dom} S$ is a core of $\overline{\mathfrak{t}_S}$, one obtains $\{f,f'\} \in S_{\mathrm{F}}$ from Theorem 5.1.18 (iii). Hence, $S_{\mathrm{F}}$ is a self-adjoint extension of $S$ with lower bound $m(S_{\mathrm{F}}) = m(S)$.

In order to verify (5.3.2) it now suffices to see that $\{0\} \times \operatorname{mul} S^* \subset S_{\mathrm{F}}$. Let $\varphi \in \operatorname{mul} S^*$ and let $g \in \operatorname{dom} S$; then clearly $\overline{\mathfrak{t}_S}[0,g] = 0$ and $(\varphi, g) = 0$. Therefore,

$$\overline{\mathfrak{t}\_S}[0, g] = (\varphi, g) \quad \text{for all} \quad g \in \operatorname{dom} S.$$

Since $\operatorname{dom} S$ is a core of $\overline{\mathfrak{t}_S}$, it follows that $\{0, \varphi\} \in S_{\mathrm{F}}$.

To see that (5.3.3) holds, it suffices to recall from Lemma 5.1.17 that $\overline{\mathfrak{t}_S} = \overline{\mathfrak{t}_{\overline{S}}}$ holds for the closures of $\mathfrak{t}_S$ and $\mathfrak{t}_{\overline{S}}$. $\square$

**Definition 5.3.2.** Let $S$ be a semibounded relation in $\mathfrak{H}$. The semibounded self-adjoint relation $S_{\mathrm{F}}$ associated with the closure of the form $\mathfrak{t}_S$ in (5.3.1) is called the *Friedrichs extension* of $S$. The closure $\overline{\mathfrak{t}_S}$ of the form $\mathfrak{t}_S$ will be denoted by $\mathfrak{t}_{S_{\mathrm{F}}}$, so that $\mathfrak{t}_{S_{\mathrm{F}}} = \overline{\mathfrak{t}_S}$.

Let $S$ be a semibounded relation in $\mathfrak{H}$ and let $a < m(S)$. Then $S - a$ is a nonnegative relation and it is a consequence of (5.3.1) that $\mathfrak{t}_{S-a} = \mathfrak{t}_S - a$. The translation invariance of the closures, cf. (5.1.17), leads to

$$\mathfrak{t}\_{(S-a)\_{\mathrm{F}}} = \overline{\mathfrak{t}\_{S-a}} = \overline{\mathfrak{t}\_S - a} = \overline{\mathfrak{t}\_S} - a = \mathfrak{t}\_{S\_{\mathrm{F}}} - a.$$

The nonnegative self-adjoint relation $(S-a)_{\mathrm{F}}$ corresponding to the form on the left-hand side is equal to the nonnegative self-adjoint relation corresponding to the form on the right-hand side. Thus, one obtains

$$(S - a)\_{\mathcal{F}} = S\_{\mathcal{F}} - a, \quad a < m(S). \tag{5.3.4}$$

In other words, the Friedrichs extension is translation invariant.
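A standard concrete example, stated here as an illustration (it anticipates the treatment of differential operators later in the book; the details of the minimal domain are our assumptions in this sketch):

```latex
% Standard illustration (ours): the Friedrichs extension of the minimal
% operator S f = -f'' in L^2(0,1) with
%   dom S = { f in H^2(0,1) : f(0) = f(1) = f'(0) = f'(1) = 0 }.
% Integration by parts gives, as in (5.3.1),
\[
  \mathfrak{t}_S[f,g] = (-f'', g) = \int_0^1 f'(x)\,\overline{g'(x)}\,\mathrm{d}x,
  \qquad f, g \in \operatorname{dom} S,
\]
% and the closure of t_S has domain H_0^1(0,1).  The self-adjoint operator
% induced by this closed form is the Dirichlet realization
\[
  S_{\mathrm{F}} f = -f'', \qquad
  \operatorname{dom} S_{\mathrm{F}} = H^2(0,1) \cap H_0^1(0,1),
\]
% with m(S_F) = m(S) = pi^2, the first Dirichlet eigenvalue.
```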

By Lemma 5.3.1, the Friedrichs extension $S_{\mathrm{F}}$ is a semibounded self-adjoint extension of $S$. As a restriction of $S^*$, the Friedrichs extension can be characterized as follows.

**Theorem 5.3.3.** Let $S$ be a semibounded relation in $\mathfrak{H}$. The Friedrichs extension $S_{\mathrm{F}}$ of $S$ admits the representation

$$S\_{\mathrm{F}} = \left\{ \{f, f'\} \in S^\* \, : \, f \in \operatorname{dom} \mathfrak{t}\_{S\_{\mathrm{F}}} \right\} \tag{5.3.5}$$

with $\operatorname{mul} S_{\mathrm{F}} = \operatorname{mul} S^*$. Furthermore, if $H$ is a self-adjoint extension of $S$, not necessarily semibounded, then

$$\operatorname{dom} H \subset \operatorname{dom} \mathfrak{t}\_{S\_{\mathrm{F}}} \quad \Rightarrow \quad H = S\_{\mathrm{F}}.$$

Proof. In order to show that $S_{\mathrm{F}}$ is contained in the right-hand side of (5.3.5), let $\{f,f'\} \in S_{\mathrm{F}}$. Clearly, $S \subset S_{\mathrm{F}}$, and since $S_{\mathrm{F}}$ is self-adjoint this implies $S_{\mathrm{F}} \subset S^*$, so that $\{f,f'\} \in S^*$. Note also that $f \in \operatorname{dom} S_{\mathrm{F}} \subset \operatorname{dom} \mathfrak{t}_{S_{\mathrm{F}}}$. Hence, $S_{\mathrm{F}}$ is contained in the right-hand side of (5.3.5).

To show the opposite inclusion, let $\{f,f'\} \in S^*$ be such that $f \in \operatorname{dom} \mathfrak{t}_{S_{\mathrm{F}}}$. Then there exists a sequence $(f_n)$ in $\operatorname{dom} \mathfrak{t}_S = \operatorname{dom} S$ with

$$f\_n \to\_{\mathfrak{t}\_{S\_{\mathrm{F}}}} f.$$

Let $\{f_n, f'_n\}$ be corresponding elements in $S$ and let $\{g, g'\} \in S$ be arbitrary. Then $\mathfrak{t}_S \subset \mathfrak{t}_{S_{\mathrm{F}}}$ and $S \subset S^*$ imply that

$$\mathfrak{t}\_{S\_{\mathrm{F}}}[f\_n, g] = \mathfrak{t}\_S[f\_n, g] = (f'\_n, g) = (f\_n, g').$$

Since $\mathfrak{t}_{S_{\mathrm{F}}}$ is a closed form, it follows that

$$\mathfrak{t}\_{S\_{\mathrm{F}}}[f,g] = (f,g').$$

As $\{f,f'\} \in S^*$ and $\{g,g'\} \in S$, one obtains $(f,g') = (f',g)$, so that

$$\mathfrak{t}\_{S\_{\mathrm{F}}}[f,g] = (f',g).$$

This identity holds for an arbitrary element $g \in \operatorname{dom} S$. Since $\operatorname{dom} S = \operatorname{dom} \mathfrak{t}_S$ is a core of the form $\mathfrak{t}_{S_{\mathrm{F}}}$, it follows from Theorem 5.1.18 (iii) that $\{f,f'\} \in S_{\mathrm{F}}$.

Now let $H$ be any self-adjoint extension of $S$ with $\operatorname{dom} H \subset \operatorname{dom} \mathfrak{t}_{S_{\mathrm{F}}}$. Hence, if $\{f,f'\} \in H$, then $\{f,f'\} \in S^*$ and $f \in \operatorname{dom} H \subset \operatorname{dom} \mathfrak{t}_{S_{\mathrm{F}}}$. By (5.3.5), one has $\{f,f'\} \in S_{\mathrm{F}}$. This shows $H \subset S_{\mathrm{F}}$, and since both $H$ and $S_{\mathrm{F}}$ are self-adjoint, it follows that $H = S_{\mathrm{F}}$. $\square$

According to Theorem 5.3.3, the inclusion $\operatorname{dom} H \subset \operatorname{dom} \mathfrak{t}_{S_{\mathrm{F}}}$ for any self-adjoint extension $H$ of $S$ implies $H = S_{\mathrm{F}}$. Note that in general a self-adjoint extension of a semibounded relation is not necessarily semibounded. The situation is different when $S$ has finite defect numbers; cf. Proposition 5.5.8.

The construction of $S_{\mathrm{F}}$ in (5.3.5) results in a description of $S_{\mathrm{F}}$ by means of approximating elements from the graph of $S$.

**Corollary 5.3.4.** Let $S$ be a semibounded relation in $\mathfrak{H}$. Then $S_{\mathrm{F}}$ is the set of all elements $\{f,f'\} \in S^*$ for which there exists a sequence $(\{f_n, f'_n\})$ in $S$ such that

$$f\_n \to f \quad \text{and} \quad (f'\_n, f\_n) \to (f', f).$$

Proof. By Theorem 5.3.3, $S_{\mathrm{F}}$ is the set of all elements $\{f,f'\} \in S^*$ for which $f \in \operatorname{dom} \mathfrak{t}_{S_{\mathrm{F}}}$. Hence, there exists a sequence $(\{f_n, f'_n\})$ in $S$ such that

$$f\_n \to\_{\mathfrak{t}\_{S\_{\mathrm{F}}}} f.$$

In particular, $f_n \to f$ in $\mathfrak{H}$ and, moreover,

$$(f',f) = \mathfrak{t}\_{S\_{\mathrm{F}}}[f,f] = \lim\_{n \to \infty} \mathfrak{t}\_{S\_{\mathrm{F}}}[f\_n,f\_n] = \lim\_{n \to \infty} \mathfrak{t}\_S[f\_n,f\_n] = \lim\_{n \to \infty} (f'\_n,f\_n).$$

Hence, $S_{\mathrm{F}}$ is contained in the relation

$$\left\{ \{ f, f' \} \in S^\* \,:\, f\_n \to f \text{ and } (f'\_n, f\_n) \to (f', f) \text{ for some } \{ f\_n, f'\_n \} \in S \right\}.$$

Observe that this relation is symmetric, since $(f'_n, f_n) \in \mathbb{R}$ implies $(f', f) \in \mathbb{R}$. Thus, the self-adjoint relation $S_{\mathrm{F}}$ is contained in the symmetric relation above, and therefore they coincide. $\square$

Since $S \subset S_{\mathrm{F}} \subset S^*$ and $\operatorname{dom} S_{\mathrm{F}} \subset \overline{\operatorname{dom}}\, S$ by Corollary 5.3.4, one has that

$$\text{dom } S \subset \text{dom } S\_{\mathcal{F}} \subset \left( \overline{\text{dom }} S \cap \text{dom } S^\* \right).$$

The next corollary shows when these inclusions are identities.

**Corollary 5.3.5.** Let $S$ be a semibounded relation in $\mathfrak{H}$. Then

$$S \stackrel{\frown}{+} \dot{\mathfrak{N}}\_{\infty}(S^\*) = S\_{\mathrm{F}}, \quad \dot{\mathfrak{N}}\_{\infty}(S^\*) = \{0\} \times \operatorname{mul} S^\*,\tag{5.3.6}$$

if and only if

$$\operatorname{dom} S = \overline{\operatorname{dom}} S \cap \operatorname{dom} S^\*.$$

In particular, if $\operatorname{dom} S$ is closed, then $S_{\mathrm{F}}$ has the form (5.3.6).

Proof. Recall from (5.3.2) that $S \stackrel{\frown}{+} \dot{\mathfrak{N}}_\infty(S^*) \subset S_{\mathrm{F}}$. Note that there is equality if and only if $S \stackrel{\frown}{+} \dot{\mathfrak{N}}_\infty(S^*)$ is self-adjoint. Hence, the assertion follows from Lemma 1.5.7. $\square$

The construction of the Friedrichs extension $S_{\mathrm{F}}$ of a semibounded relation $S$ via the form $\mathfrak{t}_S$ in (5.3.1) leads to an important characteristic property. First of all, recall that

$$\mathfrak{t}\_{S\_{\mathrm{F}}}[f,g] = (f',g), \quad \{f,f'\} \in S\_{\mathrm{F}}, \quad g \in \operatorname{dom} \mathfrak{t}\_{S\_{\mathrm{F}}},$$

with lower bound $m(S_{\mathrm{F}}) = m(S)$. Now assume that $H$ is another semibounded self-adjoint extension of $S$. Then clearly $m(H) \le m(S)$, and according to Proposition 5.1.19, the relation $H$ generates a closed semibounded form $\mathfrak{t}_H$ in $\mathfrak{H}$ with

$$\mathfrak{t}\_H[f,g] = (f',g), \quad \{f,f'\} \in H, \quad g \in \text{dom } \mathfrak{t}\_H.$$

By specializing to $\{f,f'\} \in S$ and $g \in \operatorname{dom} S$, it follows that

$$\mathfrak{t}\_H[f,g] = (f',g) = \mathfrak{t}\_S[f,g]$$

and hence $\mathfrak{t}_S \subset \mathfrak{t}_H$. By construction, $\mathfrak{t}_S \subset \overline{\mathfrak{t}_S} = \mathfrak{t}_{S_{\mathrm{F}}}$ and hence $\mathfrak{t}_{S_{\mathrm{F}}}$ is the smallest closed form extension of $\mathfrak{t}_S$; cf. Theorem 5.1.12. Therefore,

$$\mathfrak{t}\_S \subset \mathfrak{t}\_{S\_{\mathrm{F}}} \subset \mathfrak{t}\_H. \tag{5.3.7}$$

This leads to the extremality property of $S_{\mathrm{F}}$ stated in the next result.

**Proposition 5.3.6.** Let $S$ be a semibounded relation in $\mathfrak{H}$ and let $H$ be a semibounded self-adjoint extension of $S$. Then $m(\mathfrak{t}_H) = m(H) \le m(S_{\mathrm{F}}) = m(S)$ and

$$\mathfrak{t}\_H \le \mathfrak{t}\_{S\_{\mathrm{F}}} \quad \text{and} \quad H \le S\_{\mathrm{F}};$$

or, equivalently, for some, and hence for all, $a < m(H)$,

$$(S\_{\mathcal{F}} - a)^{-1} \le (H - a)^{-1}.$$

Proof. It is a consequence of (5.3.7) and Lemma 5.2.2 (i) that $\mathfrak{t}_H \le \mathfrak{t}_{S_{\mathrm{F}}}$. The rest of the statements follow from Theorem 5.2.4 and Proposition 5.2.7. $\square$

According to Proposition 5.3.6, the Friedrichs extension $S_{\mathrm{F}}$ is the largest semibounded self-adjoint extension of $S$ in the sense of the ordering for forms or relations, and so it has the smallest form domain. Recall that for any semibounded self-adjoint extension $H$ of $S$ one has for $a < m(H) \le m(S_{\mathrm{F}})$ that

$$\operatorname{dom}\left(H - a\right)^{\frac{1}{2}} = \operatorname{ran} R(a)^{\frac{1}{2}} + \operatorname{dom}\left(S\_{\mathcal{F}} - a\right)^{\frac{1}{2}},\tag{5.3.8}$$

where the nonnegative operator R(a) is defined by

$$R(a) = (H - a)^{-1} - (S\_\mathcal{F} - a)^{-1} \in \mathbf{B}(\mathfrak{H});\tag{5.3.9}$$

cf. Corollary 5.2.9. The identity (5.3.8) will now be put in a geometric context. Recall that for $a < m(H)$ the closed nonnegative form $\mathfrak{t}_H - a$ on $\mathfrak{H}$ defines the following inner product on $\operatorname{dom} \mathfrak{t}_H$:

$$(f,g)\_{\mathfrak{t}\_H-a} = \mathfrak{t}\_H[f,g] - a(f,g), \quad f,g \in \operatorname{dom} \mathfrak{t}\_H = \operatorname{dom}\,(H-a)^{\frac{1}{2}},\tag{5.3.10}$$

which makes the space $\mathfrak{H}_{\mathfrak{t}_H - a} = \operatorname{dom} \mathfrak{t}_H = \operatorname{dom}(H-a)^{\frac{1}{2}}$ complete; cf. Lemma 5.1.8. Similarly, the closed nonnegative form $\mathfrak{t}_{S_{\mathrm{F}}} - a$ on $\mathfrak{H}$ defines the following inner product on $\operatorname{dom} \mathfrak{t}_{S_{\mathrm{F}}}$:

$$(f,g)\_{\mathfrak{t}\_{S\_{\mathrm{F}}}-a} = \mathfrak{t}\_{S\_{\mathrm{F}}}[f,g] - a(f,g), \quad f,g \in \operatorname{dom}\,\mathfrak{t}\_{S\_{\mathrm{F}}} = \operatorname{dom}\,(S\_{\mathrm{F}} - a)^{\frac{1}{2}},\tag{5.3.11}$$

which makes the space $\mathfrak{H}_{\mathfrak{t}_{S_{\mathrm{F}}} - a} = \operatorname{dom} \mathfrak{t}_{S_{\mathrm{F}}} = \operatorname{dom}(S_{\mathrm{F}} - a)^{\frac{1}{2}}$ complete. Thus, in terms of inner product spaces one obtains

$$\mathfrak{H}\_{\mathfrak{t}\_{S\_{\mathrm{F}}} - a} \subset \mathfrak{H}\_{\mathfrak{t}\_H - a},$$

and by (5.3.7) the restriction of the inner product in (5.3.10) to $\mathfrak{H}_{\mathfrak{t}_{S_{\mathrm{F}}} - a}$ coincides with the inner product in (5.3.11). Therefore,

$$\mathfrak{H}\_{\mathfrak{t}\_H - a} = \left( \mathfrak{H}\_{\mathfrak{t}\_H - a} \ominus\_{\mathfrak{t}\_H - a} \mathfrak{H}\_{\mathfrak{t}\_{\mathfrak{S}\_F} - a} \right) \oplus\_{\mathfrak{t}\_H - a} \mathfrak{H}\_{\mathfrak{t}\_{\mathfrak{S}\_F} - a},\tag{5.3.12}$$

see Corollary 5.1.13. In terms of the spaces $\mathfrak{H}_{\mathfrak{t}_H - a}$ and $\mathfrak{H}_{\mathfrak{t}_{S_{\mathrm{F}}} - a}$ the sum decomposition in (5.3.8) may be rewritten as

$$\mathfrak{H}\_{\mathfrak{t}\_H-a} = \operatorname{ran}\, R(a)^{\frac{1}{2}} + \mathfrak{H}\_{\mathfrak{t}\_{S\_{\mathrm{F}}}-a}.\tag{5.3.13}$$

The connection between the decompositions in (5.3.12) and (5.3.13) is discussed in the next proposition.

**Proposition 5.3.7.** Let $S$ be a semibounded relation in $\mathfrak{H}$, let $S_{\mathrm{F}}$ be its Friedrichs extension, and let $H$ be a semibounded self-adjoint extension of $S$ with lower bound $m(H)$. Furthermore, let $a < m(H)$ and let $R(a)$ be the nonnegative operator in (5.3.9). Then

$$\mathfrak{H}\_{\mathfrak{t}\_H - a} \ominus\_{\mathfrak{t}\_H - a} \mathfrak{H}\_{\mathfrak{t}\_{S\_{\mathrm{F}}} - a} = \ker \left( S^\* - a \right) \cap \mathfrak{H}\_{\mathfrak{t}\_H - a} = \operatorname{ran}\, R(a)^{\frac{1}{2}},\tag{5.3.14}$$

and, consequently, the Hilbert space $\mathfrak{H}_{\mathfrak{t}_H - a}$ has the orthogonal decomposition

$$\begin{split} \mathfrak{H}\_{\mathsf{t}\_{H}-a} &= \left( \ker \left( S^\* - a \right) \cap \mathfrak{H}\_{\mathsf{t}\_{H}-a} \right) \oplus\_{\mathsf{t}\_{H}-a} \mathfrak{H}\_{\mathsf{t}\_{\mathsf{S}\_{\mathsf{F}}}-a} \\ &= \text{ran} \, R(a)^{\frac{1}{2}} \oplus\_{\mathsf{t}\_{H}-a} \mathfrak{H}\_{\mathsf{t}\_{\mathsf{S}\_{\mathsf{F}}}-a} . \end{split} \tag{5.3.15}$$

In particular, the sum decomposition in (5.3.8) is direct for every $a < m(H)$.

Proof. First the identity

$$\mathfrak{H}\_{\mathfrak{t}\_H - a} \ominus\_{\mathfrak{t}\_H - a} \mathfrak{H}\_{\mathfrak{t}\_{S\_{\mathrm{F}}} - a} = \ker \left( S^\* - a \right) \cap \mathfrak{H}\_{\mathfrak{t}\_H - a} \tag{5.3.16}$$

in (5.3.14) will be shown. Recall from Lemma 5.1.17 and Theorem 5.1.12 that $\mathfrak{H}_{\mathfrak{t}_S - a}$ is a dense subspace of $\mathfrak{H}_{\mathfrak{t}_{S_{\mathrm{F}}} - a}$, and hence it suffices to verify that

$$\mathfrak{H}\_{\mathfrak{t}\_H - a} \ominus\_{\mathfrak{t}\_H - a} \mathfrak{H}\_{\mathfrak{t}\_S - a} = \ker \left( S^\* - a \right) \cap \mathfrak{H}\_{\mathfrak{t}\_H - a}. \tag{5.3.17}$$

Assume first that $g$ belongs to the left-hand side of (5.3.17). Then $(f,g)_{\mathfrak{t}_H - a} = 0$ for all $f \in \operatorname{dom} S$. Hence,

$$0 = (f,g)\_{\mathfrak{t}\_H - a} = \mathfrak{t}\_H[f,g] - a(f,g) = (f',g) - a(f,g) = (f' - af, g)$$

for all $\{f,f'\} \in S \subset H$, where in the third equality the first representation theorem was used. This implies $g \in (\operatorname{ran}(S-a))^{\perp} \cap \mathfrak{H}_{\mathfrak{t}_H - a} = \ker(S^* - a) \cap \mathfrak{H}_{\mathfrak{t}_H - a}$. Conversely, assume that $g \in \ker(S^* - a) \cap \mathfrak{H}_{\mathfrak{t}_H - a}$. Then the same reasoning as above shows that $g$ belongs to the left-hand side of (5.3.17).

In order to prove the second equality in (5.3.14), one first shows that

$$\operatorname{ran} R(a)^{\frac{1}{2}} \subset \ker \left( S^\* - a \right). \tag{5.3.18}$$

To see this, let $\{f,f'\} \in S$. Then $\{f,f'\} \in H \cap S_{\mathrm{F}}$ and hence $(H-a)^{-1}(f' - af) = f$ and $(S_{\mathrm{F}}-a)^{-1}(f' - af) = f$, which implies that for $h \in \mathfrak{H}$

$$(f' - af, R(a)h) = \left( (H - a)^{-1}(f' - af) - (S\_\mathcal{F} - a)^{-1}(f' - af), h \right) = 0.$$

Therefore,

$$\operatorname{ran}\,R(a) \subset (\operatorname{ran}\,(S - a))^{\perp}$$

and hence also $\operatorname{ran} R(a) \subset \ker(S^* - a)$, because $(\operatorname{ran}(S-a))^{\perp} = \ker(S^* - a)$. Moreover, since $R(a)$ is a nonnegative self-adjoint operator it follows from Corollary D.7 that

$$\operatorname{ran} R(a) \subset \operatorname{ran} R(a)^{\frac{1}{2}} \subset \overline{\operatorname{ran}} R(a)^{\frac{1}{2}} = \overline{\operatorname{ran}} R(a) \subset \ker \left( S^\* - a \right), \tag{5.3.19}$$

which shows (5.3.18). By (5.3.8), one has $\operatorname{ran} R(a)^{\frac{1}{2}} \subset \operatorname{dom}(H-a)^{\frac{1}{2}} = \mathfrak{H}_{\mathfrak{t}_H - a}$. Together with (5.3.19) one concludes that $\operatorname{ran} R(a)^{\frac{1}{2}} \subset \ker(S^* - a) \cap \mathfrak{H}_{\mathfrak{t}_H - a}$. From (5.3.16) it is clear that

$$\operatorname{ran} R(a)^{\frac{1}{2}} \subset \mathfrak{H}\_{\mathfrak{t}\_H - a} \ominus\_{\mathfrak{t}\_H - a} \mathfrak{H}\_{\mathfrak{t}\_{S\_{\mathrm{F}}} - a}.$$

Comparing (5.3.8) with (5.3.12), one then concludes that

$$\operatorname{ran}\,R(a)^{\frac{1}{2}} = \mathfrak{H}\_{\mathfrak{t}\_H - a} \ominus\_{\mathfrak{t}\_H - a} \mathfrak{H}\_{\mathfrak{t}\_{S\_{\mathrm{F}}} - a}.$$

Together with (5.3.16) the identities in (5.3.14) follow. Furthermore, the above reasoning shows that the sum decomposition in (5.3.8) is direct. $\square$

The semibounded self-adjoint extensions $H$ of $S$ for which the subspace $\ker(S^* - a) \cap \mathfrak{H}_{\mathfrak{t}_H - a}$ is not a proper subset of $\ker(S^* - a)$ are of special interest. In fact, they coincide with the semibounded self-adjoint extensions $H$ for which $H$ and $S_{\mathrm{F}}$ are transversal.

**Theorem 5.3.8.** Let $S$ be a semibounded relation in $\mathfrak{H}$ and let $H$ be a semibounded self-adjoint extension of $S$. Then for some, and hence for all, $a < m(H)$ the following statements are equivalent:

(i) $H$ and $S_{\mathrm{F}}$ are transversal;

(ii) $\ker(S^* - a) \subset \operatorname{dom}(H-a)^{\frac{1}{2}}$;

(iii) $\operatorname{dom}(H-a)^{\frac{1}{2}} = \ker(S^* - a) + \operatorname{dom}(S_{\mathrm{F}} - a)^{\frac{1}{2}}$;

(iv) $\operatorname{dom} S^* \subset \operatorname{dom}(H-a)^{\frac{1}{2}}$.

Proof. (i) $\Leftrightarrow$ (ii) In general, the self-adjoint extensions $H$ and $S_{\mathrm{F}}$ are transversal if and only if for some, and hence for all, $a < m(H)$

$$\text{ran}\,R(a) = \ker\left(S^\*-a\right),$$

where $R(a) = (H-a)^{-1} - (S_{\mathrm{F}}-a)^{-1}$; this follows from Theorem 1.7.8. Since $R(a)$ is a nonnegative self-adjoint operator and since $\ker(S^* - a)$ is closed, the last statement is equivalent to

$$\text{ran}\,R(a)^{\frac{1}{2}} = \text{ker}\,(S^\*-a);$$

cf. Corollary D.7. It follows from Proposition 5.3.7 that this condition is the same as

$$\ker\left(S^\*-a\right) \cap \text{dom}\left(H-a\right)^{\frac{1}{2}} = \ker\left(S^\*-a\right).$$

(ii) ⇒ (iii) This follows immediately from the direct sum decomposition

$$\operatorname{dom}\left(H - a\right)^{\frac{1}{2}} = \left(\ker\left(S^\* - a\right) \cap \operatorname{dom}\left(H - a\right)^{\frac{1}{2}}\right) + \operatorname{dom}\left(S\_\mathcal{F} - a\right)^{\frac{1}{2}};$$

cf. (5.3.8) and Proposition 5.3.7.

(iii) ⇒ (ii) This implication is trivial.

(i) $\Rightarrow$ (iv) The identity $S^* = S_{\mathrm{F}} \stackrel{\frown}{+} H$ shows that

$$
\text{dom}\,S^\* = \text{dom}\,S\_\mathcal{F} + \text{dom}\,H,
$$

and note that $\operatorname{dom} H$ and $\operatorname{dom} S_{\mathrm{F}}$ are subsets of $\operatorname{dom}(H-a)^{\frac{1}{2}}$.

(iv) $\Rightarrow$ (ii) This is clear. $\square$

The following result is a consequence of Theorem 5.3.8; it describes the behavior of an arbitrary semibounded self-adjoint extension $H'$ of $S$ in the presence of a semibounded self-adjoint extension $H$ such that $H$ and $S_{\mathrm{F}}$ are transversal.

**Corollary 5.3.9.** Let $S$ be a semibounded relation in $\mathfrak{H}$ and let $H$ be a semibounded self-adjoint extension of $S$ such that $H$ and $S_{\mathrm{F}}$ are transversal. Then every semibounded self-adjoint extension $H'$ of $S$ satisfies

$$\operatorname{dom}\left(H'-a\right)^{\frac{1}{2}} \subset \operatorname{dom}\left(H-a\right)^{\frac{1}{2}}, \quad a < \min\left\{m(H), m(H')\right\},\tag{5.3.20}$$

and there exists $C > 0$ such that

$$\|(H\_{\rm op} - a)^{\frac{1}{2}}\varphi\| \le C \|((H')\_{\rm op} - a)^{\frac{1}{2}}\varphi\|\tag{5.3.21}$$

for all $\varphi \in \operatorname{dom}(H' - a)^{\frac{1}{2}}$. Moreover, there is equality in (5.3.20) if and only if $H'$ and $S_{\mathrm{F}}$ are transversal, in which case there exist $c > 0$ and $C > 0$ such that

$$c\,\|((H')\_{\rm op} - a)^{\frac{1}{2}}\varphi\| \le \|(H\_{\rm op} - a)^{\frac{1}{2}}\varphi\| \le C\,\|((H')\_{\rm op} - a)^{\frac{1}{2}}\varphi\|\tag{5.3.22}$$

for all $\varphi \in \operatorname{dom}(H' - a)^{\frac{1}{2}} = \operatorname{dom}(H - a)^{\frac{1}{2}}$.

Proof. By Theorem 5.3.8, the semibounded self-adjoint extension $H$ of $S$ satisfies all of the equivalent conditions (i)–(iv) in Theorem 5.3.8. Let $H'$ be another semibounded self-adjoint extension of $S$. Applying Proposition 5.3.7 to $H'$ one sees that for $a < m(H')$

$$\operatorname{dom}\left(H'-a\right)^{\frac{1}{2}} = \left(\ker\left(S^\*-a\right)\cap\mathfrak{H}\_{\operatorname{t}\_{H'}-a}\right) + \operatorname{dom}\left(S\_{\mathcal{F}}-a\right)^{\frac{1}{2}}.$$

Choosing $a < \min\{m(H), m(H')\}$, it follows from Theorem 5.3.8 (iii) for $H$ that the inclusion (5.3.20) holds. The inequality (5.3.21) is a direct consequence of Proposition 1.5.11.

Assume that there is equality in (5.3.20). Then Theorem 5.3.8 (iii) implies that $H'$ and $S_{\mathrm{F}}$ are transversal. Conversely, if $H'$ and $S_{\mathrm{F}}$ are transversal, then it follows from Theorem 5.3.8 (iii) that there is equality in (5.3.20). If there is equality in (5.3.20), then (5.3.21) also holds for some $c > 0$ when $H$ and $H'$ are interchanged. Thus, the inequalities in (5.3.22) hold. $\square$

The extreme case of equality of H and S<sub>F</sub> is described in the following immediate corollary of Proposition 5.3.7 and (5.3.8).

**Corollary 5.3.10.** Let S be a semibounded relation in H and let H be a semibounded self-adjoint extension of S. Then H = S<sub>F</sub> if and only if

$$\ker\left(S^\*-a\right) \cap \text{dom}\left(H-a\right)^{\frac{1}{2}} = \{0\}$$

for some, and hence for all, a < m(H).

In the next corollary the form corresponding to a semibounded self-adjoint extension H which is transversal to S<sub>F</sub> is specified.

**Corollary 5.3.11.** Let S be a semibounded relation in H, let H be a semibounded self-adjoint extension of S such that H and S<sub>F</sub> are transversal, and let t<sub>S<sub>F</sub></sub> and t<sub>H</sub> be the corresponding closed semibounded forms in H. Then

$$\operatorname{dom} \mathfrak{t}\_H = \ker \left( S^\* - a \right) \oplus\_{\mathfrak{t}\_H - a} \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}, \qquad a < m(H), \tag{5.3.23}$$

and the restriction of t<sub>H</sub> to N<sub>a</sub>(S<sup>∗</sup>) = ker (S<sup>∗</sup> − a) is a closed form in N<sub>a</sub>(S<sup>∗</sup>) which is bounded from below by m(H) and represented by a bounded self-adjoint operator L<sub>a</sub> ∈ **B**(N<sub>a</sub>(S<sup>∗</sup>)). Furthermore, one has

$$\mathfrak{t}\_H[f,g] - a(f,g) = \left( (L\_a - a)f\_a, g\_a \right) + \mathfrak{t}\_{S\_{\mathcal{F}}}[f\_\mathcal{F}, g\_\mathcal{F}] - a(f\_\mathcal{F}, g\_\mathcal{F}) \tag{5.3.24}$$

for all f = f<sub>a</sub> + f<sub>F</sub>, g = g<sub>a</sub> + g<sub>F</sub> ∈ dom t<sub>H</sub>, where f<sub>a</sub>, g<sub>a</sub> ∈ ker (S<sup>∗</sup> − a) and f<sub>F</sub>, g<sub>F</sub> ∈ dom t<sub>S<sub>F</sub></sub>.

Proof. The orthogonal decomposition (5.3.23) of dom t<sub>H</sub> follows from (5.3.15) in Proposition 5.3.7 and Theorem 5.3.8 (iii). Since the restriction of the form t<sub>H</sub> to N<sub>a</sub>(S<sup>∗</sup>) is a closed form which is bounded from below by m(H), it follows from Theorem 5.1.18 that there exists a semibounded self-adjoint relation L<sub>a</sub> in N<sub>a</sub>(S<sup>∗</sup>) which represents this form. Moreover, it follows from Theorem 5.1.23 that N<sub>a</sub>(S<sup>∗</sup>) = dom (L<sub>a</sub> − x)<sup>1/2</sup>, x ≤ m(H), and hence (L<sub>a</sub> − x)<sup>1/2</sup> and L<sub>a</sub> are bounded self-adjoint operators defined on N<sub>a</sub>(S<sup>∗</sup>).

For f, g ∈ dom t<sub>H</sub>, decomposed with respect to (5.3.23) as

$$f = f\_a + f\_\mathcal{F} \quad \text{and} \quad g = g\_a + g\_\mathcal{F},$$

where f<sub>a</sub>, g<sub>a</sub> ∈ ker (S<sup>∗</sup> − a) and f<sub>F</sub>, g<sub>F</sub> ∈ dom t<sub>S<sub>F</sub></sub>, one has (f<sub>a</sub>, g<sub>F</sub>)<sub>t<sub>H</sub>−a</sub> = 0 and (f<sub>F</sub>, g<sub>a</sub>)<sub>t<sub>H</sub>−a</sub> = 0, and hence

$$\begin{aligned} \mathfrak{t}\_H[f,g] - a(f,g) &= (f,g)\_{\mathfrak{t}\_H - a} = (f\_a, g\_a)\_{\mathfrak{t}\_H - a} + (f\_\mathcal{F}, g\_\mathcal{F})\_{\mathfrak{t}\_H - a} \\ &= \mathfrak{t}\_H[f\_a, g\_a] - a(f\_a, g\_a) + \mathfrak{t}\_{S\_{\mathcal{F}}}\left[f\_\mathcal{F}, g\_\mathcal{F}\right] - a(f\_\mathcal{F}, g\_\mathcal{F}). \end{aligned}$$

Now (5.3.24) follows from t<sub>H</sub>[f<sub>a</sub>, g<sub>a</sub>] = (L<sub>a</sub>f<sub>a</sub>, g<sub>a</sub>). □

## **5.4 Semibounded self-adjoint extensions and their lower bounds**

Let S be a, not necessarily closed, semibounded relation in a Hilbert space H with lower bound m(S). The Friedrichs extension S<sub>F</sub> is a self-adjoint extension of S whose lower bound m(S<sub>F</sub>) is equal to the lower bound m(S) of S. If H is a semibounded self-adjoint extension of S, then necessarily m(H) ≤ m(S). In this section the so-called Kreĭn type extensions S<sub>K,x</sub> of S will be introduced. They can be viewed as generalizations of the Kreĭn–von Neumann extension of a nonnegative symmetric operator or relation. The Kreĭn type extensions S<sub>K,x</sub> can be used to describe all semibounded self-adjoint extensions H of S whose lower bound satisfies m(H) ∈ [x, m(S)] when x ≤ m(S).

Let S be a semibounded relation in H with lower bound γ = m(S). It is clear that x ≤ γ implies that S − x ≥ 0. Hence, for x ≤ γ the relation (S − x)<sup>−1</sup> is nonnegative and one can define the Friedrichs extension ((S − x)<sup>−1</sup>)<sub>F</sub> of (S − x)<sup>−1</sup>, which is nonnegative; cf. Definition 5.3.2.

**Lemma 5.4.1.** Let S be a semibounded relation in H with lower bound γ. For x ≤ γ the relation S<sub>K,x</sub> defined by

$$S\_{\mathcal{K},x} := \left( ((S-x)^{-1})\_{\mathcal{F}} \right)^{-1} + x \tag{5.4.1}$$

is a semibounded self-adjoint extension of S with lower bound m(S<sub>K,x</sub>) = x. Moreover, S<sub>K,x</sub> = (S̄)<sub>K,x</sub> for x ≤ γ and

$$S \subset S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*) \subset \overline{S} \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*) \subset S\_{\mathbf{K},x}, \quad x \le \gamma,\tag{5.4.2}$$

while, in particular,

$$\overline{S} \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_{x}(S^{\*}) = S\_{\mathcal{K},x}, \quad x < \gamma. \tag{5.4.3}$$

Proof. Since for x ≤ γ the Friedrichs extension ((S − x)<sup>−1</sup>)<sub>F</sub> of the nonnegative relation (S − x)<sup>−1</sup> is nonnegative, it follows that

$$\left( ((S-x)^{-1})\_{\mathcal{F}} \right)^{-1}$$

is a nonnegative self-adjoint extension of S − x. Hence, S<sub>K,x</sub> defined by (5.4.1) is a self-adjoint extension of S and, clearly,

$$m(S\_{\mathcal{K},x}) \ge x, \quad x \le \gamma. \tag{5.4.4}$$

Since the closure of (S − x)<sup>−1</sup> is given by (S̄ − x)<sup>−1</sup>, it follows from Lemma 5.3.1, with S replaced by (S − x)<sup>−1</sup>, that

$$((\overline{S} - x)^{-1})\_{\mathcal{F}} = ((S - x)^{-1})\_{\mathcal{F}},$$

which leads to S<sub>K,x</sub> = (S̄)<sub>K,x</sub> for x ≤ γ.

Let x ≤ γ and note that the first and second inclusions in (5.4.2) are clear. It is also clear that S ⊂ S̄ ⊂ S<sub>K,x</sub>; thus, to show the third inclusion in (5.4.2) it suffices to check that N̂<sub>x</sub>(S<sup>∗</sup>) ⊂ S<sub>K,x</sub>. Set T = (S − x)<sup>−1</sup>, so that T is nonnegative and mul T<sup>∗</sup> = N<sub>x</sub>(S<sup>∗</sup>). By Lemma 5.3.1, {0} × mul T<sup>∗</sup> ⊂ T<sub>F</sub> or, equivalently,

$$\{0\} \times \mathfrak{N}\_x(S^\*) \subset ((S - x)^{-1})\_{\mathcal{F}} \quad \text{or} \quad \mathfrak{N}\_x(S^\*) \times \{0\} \subset \left(((S - x)^{-1})\_{\mathcal{F}}\right)^{-1}.$$

Thus, N̂<sub>x</sub>(S<sup>∗</sup>) ⊂ S<sub>K,x</sub>, which completes the argument.

For x < γ it follows from Proposition 1.4.6 and Lemma 1.2.2 that ran (S̄ − x) is closed. Hence, the relation S̄ +̂ N̂<sub>x</sub>(S<sup>∗</sup>) is self-adjoint, cf. Lemma 1.5.7, and thus the equality (5.4.3) prevails.

It remains to show that m(S<sub>K,x</sub>) = x. When x < γ one concludes this from (5.4.3). When x = γ observe that S ⊂ S<sub>K,γ</sub> implies m(S<sub>K,γ</sub>) ≤ m(S) = γ. On the other hand, from (5.4.4) it follows that m(S<sub>K,γ</sub>) ≥ γ. □

**Definition 5.4.2.** Let S be a semibounded relation in H with lower bound γ and let x ≤ γ. The semibounded self-adjoint extensions S<sub>K,x</sub> in (5.4.1) are called Kreĭn type extensions of S. In the case γ ≥ 0 the nonnegative self-adjoint extension S<sub>K,0</sub> is called the Kreĭn–von Neumann extension of S.

The definition of the Kreĭn type extensions S<sub>K,x</sub> in (5.4.1) incorporates the lower bound m(S) = γ of S. Note that m(S − x) = γ − x for any x ∈ R, so that (S − x)<sub>K,γ−x</sub> is well defined, and

$$(S-x)\_{\mathcal{K},\gamma-x} = \left(\left(((S-x)-(\gamma-x))^{-1}\right)\_{\mathcal{F}}\right)^{-1} + \gamma - x, \quad x \in \mathbb{R},$$

which leads to

$$(S - x)\_{\mathcal{K}, \gamma - x} = S\_{\mathcal{K}, \gamma} - x, \quad x \in \mathbb{R}. \tag{5.4.5}$$

The identity (5.4.5) is the analog for the Kreĭn type extension S<sub>K,γ</sub> of the shift invariance property (5.3.4) of S<sub>F</sub>. There are some more useful identities involving Kreĭn type extensions of S. First note the simple equality

$$\left(S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*)\right)^{-1} = S^{-1} \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_{1/x}(S^{-\*}), \quad x \in \mathbb{R} \setminus \{0\}.\tag{5.4.6}$$
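The identity (5.4.6) can be checked directly on graphs; a sketch, where {f, f′} ∈ S and h ∈ ker (S<sup>∗</sup> − x), and where the middle step uses (S<sup>∗</sup>)<sup>−1</sup> = S<sup>−∗</sup>:

```latex
\[
\begin{gathered}
\{f+h,\; f'+xh\} \in S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}_x(S^*),
\qquad \{f,f'\} \in S,\quad h \in \ker(S^*-x), \\
\{h,\, xh\} \in S^* \iff \{xh,\, h\} \in (S^*)^{-1} = S^{-*}
\iff k := xh \in \ker\bigl(S^{-*} - \tfrac{1}{x}\bigr), \\
\{f'+xh,\; f+h\} = \{f',\, f\} + \bigl\{k,\; \tfrac{1}{x}k\bigr\}
\in S^{-1} \mathbin{\widehat{+}} \widehat{\mathfrak{N}}_{1/x}\bigl(S^{-*}\bigr).
\end{gathered}
\]
```

Inverting an element of the left-hand side of (5.4.6) thus produces exactly an element of the right-hand side, and vice versa.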

If S ≥ 0 or, equivalently, S<sup>−1</sup> ≥ 0, then it follows from (5.4.6) and Lemma 5.4.1 that

$$(S\_{\mathcal{K},x})^{-1} = (S^{-1})\_{\mathcal{K},1/x}, \quad x < 0,\tag{5.4.7}$$

since 0 ≤ min {m(S), m(S<sup>−1</sup>)}. In particular, one sees from (5.4.7) that

$$\left(\left(S-\gamma\right)\_{\mathbf{K},x}\right)^{-1} = \left(\left(S-\gamma\right)^{-1}\right)\_{\mathbf{K},1/x}, \quad x<0. \tag{5.4.8}$$

Returning to the general case where S has lower bound γ, note the simple equality

$$\left(S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_{a+x}(S^\*)\right) - a = (S - a) \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x((S - a)^\*), \quad a, x \in \mathbb{R}.$$

For a + x < γ this implies

$$S\_{\mathcal{K},a+x} - a = (S - a)\_{\mathcal{K},x},\tag{5.4.9}$$

and taking a = γ in (5.4.9) gives an analog of (5.4.5):

$$S\_{\mathcal{K}, \gamma + x} - \gamma = (S - \gamma)\_{\mathcal{K}, x}, \quad x < 0. \tag{5.4.10}$$

By Lemma 5.4.1, the Kreĭn type extensions S<sub>K,x</sub>, x ≤ γ, are semibounded self-adjoint extensions of S. As restrictions of S<sup>∗</sup> the Kreĭn type extensions can be characterized in a similar way as the Friedrichs extension S<sub>F</sub>; cf. Theorem 5.3.3.

**Theorem 5.4.3.** Let S be a semibounded relation in H with lower bound γ. Then for each x ≤ γ the Kreĭn type extension S<sub>K,x</sub> of S has the representation

$$S\_{\mathcal{K},x} = \left\{ \{f, f'\} \in S^\* \,:\, f' - xf \in \operatorname{dom}\mathfrak{t}\_{((S-x)^{-1})\_{\mathcal{F}}} \right\} \tag{5.4.11}$$

with ker (S<sub>K,x</sub> − x) = ker (S<sup>∗</sup> − x). Furthermore, if H is a self-adjoint extension of S, which is not necessarily semibounded, then

$$\operatorname{ran}\,(H-x)\subset\operatorname{dom}\,\mathfrak{t}\_{((S-x)^{-1})\_{\mathcal{F}}}\quad\Rightarrow\quad H=S\_{\mathcal{K},x}.$$

Proof. Let x ≤ γ. Then by definition one has (S<sub>K,x</sub> − x)<sup>−1</sup> = ((S − x)<sup>−1</sup>)<sub>F</sub>, and {f, f′} ∈ S<sub>K,x</sub> if and only if {f′ − xf, f} ∈ (S<sub>K,x</sub> − x)<sup>−1</sup>. Similarly, {f, f′} ∈ S<sup>∗</sup> if and only if {f′ − xf, f} ∈ (S<sup>∗</sup> − x)<sup>−1</sup>. Hence, the description (5.4.11) follows from the representation of ((S − x)<sup>−1</sup>)<sub>F</sub> in Theorem 5.3.3 (with S now replaced by (S − x)<sup>−1</sup>). Likewise,

$$\ker\left(S\_{\mathcal{K},x} - x\right) = \text{mul}\left((S - x)^{-1}\right)\_{\mathcal{F}} = \text{mul}\left(S^\* - x\right)^{-1} = \ker\left(S^\* - x\right).$$

The last item also follows from Theorem 5.3.3. □

There is also an approximation of S<sub>K,x</sub> by elements in S as in Corollary 5.3.4; in particular, this gives a useful description of mul S<sub>K,x</sub>.

**Corollary 5.4.4.** Let S be a semibounded relation in H with lower bound γ. Then S<sub>K,x</sub>, x ≤ γ, is the set of all elements {f, f′} ∈ S<sup>∗</sup> for which there exists a sequence ({f<sub>n</sub>, f′<sub>n</sub>}) in S such that

$$f\_n' - xf\_n \to f' - xf \quad \text{and} \quad (f\_n, f\_n' - xf\_n) \to (f, f' - xf).$$

In particular, mul S<sub>K,x</sub> is the set of all elements f′ ∈ mul S<sup>∗</sup> for which there exists a sequence ({f<sub>n</sub>, f′<sub>n</sub>}) in S such that

$$f\_n' - xf\_n \to f' \quad \text{and} \quad (f\_n, f\_n' - xf\_n) \to 0.$$

As in the case of the Friedrichs extension, the Kreĭn type extension S<sub>K,γ</sub> can sometimes be explicitly given in terms of S and an eigenspace of S<sup>∗</sup>; cf. Lemma 1.5.7. The following result is the analog of Corollary 5.3.5. The special case where ran (S − γ) is closed is particularly useful.

**Corollary 5.4.5.** Let S be a semibounded relation with lower bound γ. Then

$$S\_{\mathcal{K},\gamma} = S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_{\gamma}(S^\*)\tag{5.4.12}$$

if and only if

$$\text{ran}\left(S - \gamma\right) = \overline{\text{ran}}\left(S - \gamma\right) \cap \text{ran}\left(S^\* - \gamma\right).$$

In particular, if ran (S − γ) is closed, then S<sub>K,γ</sub> has the form (5.4.12).

The semibounded self-adjoint extensions S<sub>K,x</sub> with x ≤ γ become extremal extensions of S when a lower bound x ≤ γ for semibounded self-adjoint extensions of S is prescribed.

**Theorem 5.4.6.** Let S be a semibounded relation in H with lower bound γ. Let x ≤ γ be fixed and let H be a semibounded self-adjoint relation in H. Then the following equivalence holds:

$$S \subset H \quad \text{and} \quad x \le m(H) \quad \Leftrightarrow \quad S\_{\mathcal{K},x} \le H \le S\_{\mathcal{F}}.\tag{5.4.13}$$

In particular, the class of semibounded self-adjoint extensions of S preserving the lower bound of S is characterized by

$$S \subset H \quad \text{and} \quad \gamma = m(H) \quad \Leftrightarrow \quad S\_{\mathcal{K}, \gamma} \le H \le S\_{\mathcal{F}}.\tag{5.4.14}$$

In fact, S<sub>K,x</sub> ≤ H ≤ S<sub>F</sub>, x ≤ γ, implies that S ⊂ (S<sub>F</sub> ∩ S<sub>K,x</sub>) ⊂ H.

Proof. (⇒) Assume that H is a semibounded self-adjoint extension of S with lower bound m(H) ≥ x. Then clearly S − x ⊂ H − x and here both sides are nonnegative relations. But then also (S − x)<sup>−1</sup> ⊂ (H − x)<sup>−1</sup>, where both sides are nonnegative relations. Applying Proposition 5.3.6 to the relation (S − x)<sup>−1</sup> one obtains

$$(H - x)^{-1} \le ((S - x)^{-1})\_{\mathcal{F}}.$$

Since these relations are nonnegative, the antitonicity property of the inverse in Corollary 5.2.8 gives the inequality

$$\left( ((S-x)^{-1})\_{\mathcal{F}} \right)^{-1} \le H - x$$

or, equivalently, S<sub>K,x</sub> ≤ H. The inequality H ≤ S<sub>F</sub> holds by Proposition 5.3.6.

(⇐) Let H be a semibounded self-adjoint relation such that S<sub>K,x</sub> ≤ H ≤ S<sub>F</sub>. Then m(S<sub>K,x</sub>) ≤ m(H) ≤ m(S<sub>F</sub>) by Lemma 5.2.5 (ii) and, since m(S<sub>K,x</sub>) = x and m(S<sub>F</sub>) = γ, one concludes that H is semibounded with x ≤ m(H) ≤ γ.

It remains to show that H is an extension of S. With a < x the assumption on H is equivalent to

$$\left( (S\_{\mathcal{F}} - a)^{-1} h, h \right) \le \left( (H - a)^{-1} h, h \right) \le \left( (S\_{\mathcal{K}, x} - a)^{-1} h, h \right), \quad h \in \mathfrak{H}; \tag{5.4.15}$$

cf. Proposition 5.2.7. For R(a) = (H − a)<sup>−1</sup> − (S<sub>F</sub> − a)<sup>−1</sup> ∈ **B**(H) it follows from (5.4.15) that

$$0 \le \left( R(a)h, h \right) \le \left( (S\_{\mathcal{K}, x} - a)^{-1} h, h \right) - \left( (S\_{\mathcal{F}} - a)^{-1} h, h \right), \quad h \in \mathfrak{H}.\tag{5.4.16}$$

Now, let {f, f′} ∈ S and define h = f′ − af. Then h ∈ ran (S − a) and {h, f} ∈ (S − a)<sup>−1</sup>, and hence

$$\{h, f\} \in \left(S\_{\mathcal{F}} - a\right)^{-1} \cap \left(S\_{\mathbb{K}, x} - a\right)^{-1},$$

so that (S<sub>F</sub> − a)<sup>−1</sup>h = (S<sub>K,x</sub> − a)<sup>−1</sup>h. It follows from (5.4.16) that (R(a)h, h) = 0, and since R(a) ≥ 0, one concludes that R(a)h = 0 for all h ∈ ran (S − a).

In other words,

$$(H - a)^{-1}(f' - af) = (S\_\mathcal{F} - a)^{-1}(f' - af) = f, \quad \{f, f'\} \in S,$$

and it follows that {f, f′} ∈ H. This proves the claim S ⊂ H and completes the proof of (5.4.13). The inclusion S<sub>F</sub> ∩ S<sub>K,x</sub> ⊂ H follows similarly. □
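The antitonicity step in the proof above (Corollary 5.2.8) can be illustrated in finite dimensions, where A ≤ B for positive definite matrices implies B<sup>−1</sup> ≤ A<sup>−1</sup>; a minimal sketch in exact rational arithmetic, with illustrative matrices A and B not taken from the text:

```python
from fractions import Fraction as F

def inv2(m):
    # inverse of a 2x2 matrix [[a, b], [c, d]]
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def is_psd2(m):
    # a symmetric 2x2 matrix is positive semidefinite iff its
    # diagonal entries and its determinant are nonnegative
    return m[0][0] >= 0 and m[1][1] >= 0 and \
        m[0][0] * m[1][1] - m[0][1] * m[1][0] >= 0

# A <= B in the form sense: B - A = [[1, 0], [0, 0]] >= 0
A = [[F(2), F(1)], [F(1), F(2)]]
B = [[F(3), F(1)], [F(1), F(2)]]

# antitonicity of the inverse: A <= B implies B^{-1} <= A^{-1}
Ainv, Binv = inv2(A), inv2(B)
D = [[Ainv[i][j] - Binv[i][j] for j in range(2)] for i in range(2)]
print(is_psd2(D))  # True
```

Here D = A<sup>−1</sup> − B<sup>−1</sup> is positive semidefinite (in fact of rank one), in agreement with Corollary 5.2.8.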

If S is nonnegative, then Theorem 5.4.6 shows that the Kreĭn–von Neumann extension

$$S\_{\mathbf{K},0} = \left( (S^{-1})\_{\mathbf{F}} \right)^{-1} \tag{5.4.17}$$

in Definition 5.4.2 is the smallest nonnegative self-adjoint extension of S.

**Corollary 5.4.7.** Let S be a nonnegative relation in H and let H be a semibounded self-adjoint relation in H. Then the following equivalence holds:

S ⊂ H and m(H) ≥ 0 ⇔ S<sub>K,0</sub> ≤ H ≤ S<sub>F</sub>.

In fact, S<sub>K,0</sub> ≤ H ≤ S<sub>F</sub> implies that S ⊂ (S<sub>F</sub> ∩ S<sub>K,0</sub>) ⊂ H.

For completeness it is observed that the inequalities S<sub>K,x</sub> ≤ H ≤ S<sub>F</sub> in (5.4.13) can also be expressed by inequalities for the corresponding forms:

$$\mathfrak{t}\_{S\_{\mathcal{K},x}} \leq \mathfrak{t}\_{H} \leq \mathfrak{t}\_{S\_{\mathcal{F}}},$$

thanks to Theorem 5.2.4. Recall from (5.3.7) that the inequality t<sub>H</sub> ≤ t<sub>S<sub>F</sub></sub> is in fact equivalent to the inclusion t<sub>S<sub>F</sub></sub> ⊂ t<sub>H</sub>.

It is clear from Lemma 5.3.1 or Theorem 5.3.3 that the Friedrichs extension S<sub>F</sub> is an operator if and only if S is densely defined, in which case all self-adjoint extensions of S are operators. If S is not densely defined, then S may not be closable as an operator, in which case all self-adjoint extensions of S are multivalued. The following result shows when semibounded self-adjoint operator extensions exist.

**Corollary 5.4.8.** Let S be a semibounded operator in H with lower bound γ. Then the following statements are equivalent:


If any of these statements hold, then S is a closable operator. Furthermore, the following statements are equivalent:


If any of these statements hold, then S is a bounded operator.

Proof. (i) ⇒ (ii) Let H be a semibounded self-adjoint operator extension of S. Then H is densely defined and mul H = {0}. If x = m(H), then Theorem 5.4.6 shows that S<sub>K,x</sub> ≤ H. Hence, mul S<sub>K,x</sub> ⊂ mul H = {0} by Lemma 5.2.5 (i), and therefore S<sub>K,x</sub> is an operator extension of S.


The inclusion S ⊂ H for a self-adjoint operator H shows that S is a closable operator; this holds when one of the equivalent conditions (i)–(iii) is satisfied.

(i′) ⇒ (iii′) Let H be a bounded self-adjoint operator which extends S and let m(H) = x. Then H ∈ **B**(H) and an application of the Cauchy–Schwarz inequality for the inner product ((H − x)·, ·) yields

$$\|(H - x)f\|^2 \le \|H - x\| \left(f, (H - x)f\right), \quad f \in \mathfrak{H}.$$

In particular, for f ∈ dom S one obtains (iii′) with f′ = Sf and M = ‖H − x‖.
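The Cauchy–Schwarz step above can be illustrated numerically: for a positive semidefinite matrix A (playing the role of H − x) one has ‖Af‖<sup>2</sup> ≤ ‖A‖(f, Af); a minimal sketch, where the matrix A and the vector f are illustrative choices:

```python
# Illustration of the Cauchy-Schwarz step: for a positive semidefinite
# matrix A one has ||A f||^2 <= ||A|| (f, A f).

def apply2(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric, eigenvalues 1 and 3
norm_A = 3.0                   # operator norm = largest eigenvalue

f = [1.0, -2.0]
Af = apply2(A, f)
lhs = inner(Af, Af)            # ||A f||^2
rhs = norm_A * inner(f, Af)    # ||A|| (f, A f)
print(lhs <= rhs)  # True: 9.0 <= 18.0
```

Taking f ∈ dom S and f′ = Sf, this is exactly the shape of the inequality (iii′) with M = ‖H − x‖.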

(iii′) ⇒ (ii′) Assume that (iii′) holds for some x. To show (ii′) let {f, f′} ∈ S<sub>K,x</sub>. Then according to Corollary 5.4.4 there exists a sequence ({f<sub>n</sub>, f′<sub>n</sub>}) in S such that f′<sub>n</sub> − xf<sub>n</sub> → f′ − xf and (f<sub>n</sub>, f′<sub>n</sub> − xf<sub>n</sub>) → (f, f′ − xf). By assumption ‖f′<sub>n</sub> − xf<sub>n</sub>‖<sup>2</sup> ≤ M(f<sub>n</sub>, f′<sub>n</sub> − xf<sub>n</sub>) and taking limits gives

$$\|f' - xf\|^2 \le M(f, f' - xf) \le M\|f\|\|f' - xf\|.$$

Thus, if {f, f′} ∈ S<sub>K,x</sub>, then ‖f′ − xf‖ ≤ M‖f‖, which gives mul S<sub>K,x</sub> = {0}. In addition, one now sees that S<sub>K,x</sub> − x is a bounded operator, and since S<sub>K,x</sub> is self-adjoint, it follows that S<sub>K,x</sub> ∈ **B**(H).

(ii′) ⇒ (i′) This is clear.

The last statement follows from any of the statements (i′), (ii′), and (iii′). □

Let S be a semibounded relation in H with lower bound m(S) = γ. By means of Theorem 5.4.6 it will be shown that the mapping x ↦ S<sub>K,x</sub>, x < γ, is nondecreasing.

**Corollary 5.4.9.** Let S be a semibounded relation in H with lower bound γ. If x ≤ y < γ, then

$$S\_{\mathcal{K},x} \le S\_{\mathcal{K},y} \le S\_{\mathcal{K},\gamma} \le S\_{\mathcal{F}}.$$

Proof. By construction, S<sub>K,x</sub> and S<sub>K,y</sub> are semibounded self-adjoint extensions of S with lower bounds m(S<sub>K,x</sub>) = x and m(S<sub>K,y</sub>) = y. Hence, m(S<sub>K,x</sub>) ≤ m(S<sub>K,y</sub>) and an application of (5.4.13) in Theorem 5.4.6 gives S<sub>K,x</sub> ≤ S<sub>K,y</sub>. Similarly, m(S<sub>K,y</sub>) ≤ m(S<sub>K,γ</sub>) = γ leads to S<sub>K,y</sub> ≤ S<sub>K,γ</sub>. The inequality S<sub>K,γ</sub> ≤ S<sub>F</sub> also follows from Theorem 5.4.6. □

The Friedrichs extension S<sub>F</sub> and the Kreĭn type extensions S<sub>K,x</sub> can be approximated in the strong resolvent sense by the semibounded self-adjoint relations S<sub>K,t</sub> with t ∈ (−∞, γ); cf. Theorem 5.2.11.

**Theorem 5.4.10.** Let S be a semibounded relation in H with lower bound γ. Then the Friedrichs extension S<sub>F</sub> is given by the strong resolvent limit

$$\left(S\_{\mathcal{F}} - \lambda\right)^{-1} h = \lim\_{t \downarrow -\infty} \left(S\_{\mathcal{K},t} - \lambda\right)^{-1} h, \quad h \in \mathfrak{H},\tag{5.4.18}$$

where λ ∈ ℂ \ [γ, ∞), and for each x ≤ γ the Kreĭn type extension S<sub>K,x</sub> is given by the strong resolvent limit

$$\left(S\_{\mathbf{K},x} - \lambda\right)^{-1}h = \lim\_{t \uparrow x} \left(S\_{\mathbf{K},t} - \lambda\right)^{-1}h, \quad h \in \mathfrak{H},\tag{5.4.19}$$

where λ ∈ ℂ \ [x, ∞).

Proof. First the result in (5.4.19) will be shown. Let x ≤ γ, let ε > 0 be arbitrary, and note that by Corollary 5.4.9

$$S\_{\mathcal{K},x-\varepsilon} \le S\_{\mathcal{K},t} \le S\_{\mathcal{K},x}, \quad x-\varepsilon \le t < x.$$

In particular, for t ∈ [x − ε, x) the relations S<sub>K,t</sub> are bounded from below by x − ε. By the monotonicity of S<sub>K,t</sub> and Theorem 5.2.11, the strong resolvent limit of S<sub>K,t</sub> as t ↑ x exists for λ ∈ ℂ \ [x − ε, ∞) and it is a semibounded self-adjoint relation S′ with m(S′) ≥ x − ε and S<sub>K,t</sub> ≤ S′. It will now be shown that

$$S' = S\_{\mathcal{K},x}.\tag{5.4.20}$$

In fact, since S<sub>K,t</sub> ≤ S<sub>K,x</sub>, there is a common upper bound and hence one has S′ ≤ S<sub>K,x</sub>; see Corollary 5.2.12 (i). As S ⊂ S<sub>K,t</sub> for all t < x, this implies that S ⊂ S′ by Corollary 5.2.12 (ii). Thus, S′ is a semibounded self-adjoint extension of S. Since t = m(S<sub>K,t</sub>) ≤ m(S′) for all t < x, it follows that x ≤ m(S′) and hence S<sub>K,x</sub> ≤ S′ by Theorem 5.4.6. Combining S′ ≤ S<sub>K,x</sub> and S<sub>K,x</sub> ≤ S′, it follows from Lemma 5.2.5 (iv) that (5.4.20) holds. This establishes (5.4.19) for λ ∈ ℂ \ [x − ε, ∞). Since ε > 0 is arbitrary, one obtains (5.4.19).

Next, (5.4.18) will be shown. Apply the previous result (5.4.19) to the Kreĭn–von Neumann extension ((S − γ)<sup>−1</sup>)<sub>K,0</sub> of the nonnegative relation (S − γ)<sup>−1</sup>:

$$\left( \left( (S - \gamma)^{-1} \right)\_{\mathcal{K}, 0} - \lambda \right)^{-1} h = \lim\_{t \uparrow 0} \left( \left( (S - \gamma)^{-1} \right)\_{\mathcal{K}, t} - \lambda \right)^{-1} h, \quad h \in \mathfrak{H}, \tag{5.4.21}$$

where λ ∈ ℂ \ [0, ∞). Then it follows from (5.4.1) (with x = 0 and S replaced by (S − γ)<sup>−1</sup>) and the translation invariance property (5.3.4) of the Friedrichs extension that

$$((S - \gamma)^{-1})\_{\mathcal{K},0} = ((S - \gamma)\_{\mathcal{F}})^{-1} = (S\_{\mathcal{F}} - \gamma)^{-1}.\tag{5.4.22}$$

Likewise, using (5.4.8) and (5.4.10) one obtains for t < 0 that

$$((S - \gamma)^{-1})\_{\mathbf{K}, t} = ((S - \gamma)\_{\mathbf{K}, 1/t})^{-1} = (S\_{\mathbf{K}, \gamma + 1/t} - \gamma)^{-1}.\tag{5.4.23}$$

Substitute (5.4.22) and (5.4.23) into (5.4.21) and replace λ by 1/λ:

$$\left( (S\_{\mathcal{F}} - \gamma)^{-1} - \frac{1}{\lambda} \right)^{-1} h = \lim\_{t \uparrow 0} \left( (S\_{\mathcal{K}, \gamma + 1/t} - \gamma)^{-1} - \frac{1}{\lambda} \right)^{-1} h, \quad h \in \mathfrak{H}. \tag{5.4.24}$$

Now recall that for any relation H one has the identity

$$\left(H^{-1} - 1/\lambda\right)^{-1} = -\lambda - \lambda^{2}\left(H - \lambda\right)^{-1}, \quad \lambda \neq 0;$$

see Corollary 1.1.12. Therefore, (5.4.24) yields

$$\left(S\_{\mathcal{F}} - \gamma - \lambda\right)^{-1} h = \lim\_{t \uparrow 0} \left(S\_{\mathcal{K}, \gamma + 1/t} - \gamma - \lambda\right)^{-1} h,$$

where λ ∈ ℂ \ [0, ∞), which is equivalent to (5.4.18). □
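The identity (H<sup>−1</sup> − 1/λ)<sup>−1</sup> = −λ − λ<sup>2</sup>(H − λ)<sup>−1</sup> from Corollary 1.1.12 holds in particular for matrices, where it can be checked numerically; a minimal sketch with an illustrative 2×2 matrix H and the nonreal point λ = i:

```python
# Numerical check of (H^{-1} - 1/lam)^{-1} = -lam - lam^2 (H - lam)^{-1}
# for an invertible 2x2 matrix H; H and lam are illustrative choices.

def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def shift(m, s):
    # m - s*I
    return [[m[0][0] - s, m[0][1]], [m[1][0], m[1][1] - s]]

H = [[2.0, 1.0], [1.0, 3.0]]
lam = 1j  # off the real axis, hence outside the spectrum of H

lhs = inv2(shift(inv2(H), 1 / lam))
res = inv2(shift(H, lam))
rhs = [[-lam * (i == j) - lam**2 * res[i][j] for j in range(2)]
       for i in range(2)]

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)  # True
```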

The next lemma and Proposition 5.4.12 below show that the convergence in (5.4.18) in Theorem 5.4.10 is uniform if the limit is a compact operator. First the case where the Kreĭn–von Neumann extension S<sub>K,0</sub> is compact is treated. There is in general no analog of Lemma 5.4.11 for the other Kreĭn type extensions S<sub>K,x</sub>, x ≠ 0, since the eigenspace ker (S<sub>K,x</sub> − x) = ker (S<sup>∗</sup> − x) for the eigenvalue x ∈ σ<sub>p</sub>(S<sub>K,x</sub>) is infinite-dimensional whenever the defect numbers of S are infinite. Hence, S<sub>K,x</sub> cannot be compact for x ≠ 0.

**Lemma 5.4.11.** Let S be a bounded nonnegative operator in H and assume that the Kreĭn–von Neumann extension S<sub>K,0</sub> is a compact operator. Then S<sub>K,t</sub> ∈ **B**(H) for t < 0 and

$$\lim\_{t \uparrow 0} \|S\_{\mathbf{K},t} - S\_{\mathbf{K},0} \| = 0.$$

Proof. Since S<sub>K,0</sub> is compact one has, in particular, S<sub>K,0</sub> ∈ **B**(H) and hence S<sub>K,t</sub> ∈ **B**(H) for t < 0; cf. Corollary 5.4.9 and Definition 5.2.3. By Theorem 5.4.10, the resolvents of S<sub>K,t</sub> converge in the strong sense to the resolvent of S<sub>K,0</sub> and since all operators belong to **B**(H) it follows that S<sub>K,t</sub> converges strongly to S<sub>K,0</sub>. In fact, strong resolvent convergence is equivalent to strong graph convergence by Corollary 1.9.6, and for operators in **B**(H) this implies strong convergence. Now it will be shown that this convergence is uniform. Since S<sub>K,0</sub> is compact by assumption, for ε > 0 one can choose an orthogonal projection P<sub>ε</sub> such that ‖S<sub>K,0</sub>P<sub>ε</sub>‖ < ε and I − P<sub>ε</sub> is a finite-rank operator. Then it follows that the finite-rank operators

$$(S\_{\mathcal{K},t} - S\_{\mathcal{K},0})(I - P\_{\varepsilon}) \quad \text{and} \quad (I - P\_{\varepsilon})(S\_{\mathcal{K},t} - S\_{\mathcal{K},0})P\_{\varepsilon} \tag{5.4.25}$$

tend to zero uniformly as t ↑ 0. For t < 0 one has 0 ≤ S<sub>K,t</sub> − t ≤ S<sub>K,0</sub> − t by Corollary 5.4.9, and hence

$$0 \le P\_{\varepsilon}(S\_{\mathbf{K},t} - t)P\_{\varepsilon} \le P\_{\varepsilon}(S\_{\mathbf{K},0} - t)P\_{\varepsilon}.$$


This implies

$$\begin{aligned} \|P\_{\varepsilon}(S\_{\mathcal{K},t} - S\_{\mathcal{K},0})P\_{\varepsilon}\| &\leq \|P\_{\varepsilon}(S\_{\mathcal{K},t} - t)P\_{\varepsilon}\| + \|P\_{\varepsilon}(t - S\_{\mathcal{K},0})P\_{\varepsilon}\| \\ &\leq 2\|P\_{\varepsilon}(S\_{\mathcal{K},0} - t)P\_{\varepsilon}\| \\ &\leq 2\varepsilon + 2|t|, \end{aligned}$$

and now the assertion follows together with (5.4.25) and the estimate

$$\begin{split} \|S\_{\mathcal{K},t} - S\_{\mathcal{K},0}\| \leq & \|(S\_{\mathcal{K},t} - S\_{\mathcal{K},0})(I - P\_{\varepsilon})\| \\ &+ \|P\_{\varepsilon} (S\_{\mathcal{K},t} - S\_{\mathcal{K},0}) P\_{\varepsilon}\| + \|(I - P\_{\varepsilon}) (S\_{\mathcal{K},t} - S\_{\mathcal{K},0}) P\_{\varepsilon}\|. \end{split}$$

This completes the proof. □
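The final estimate rests on the exact algebraic decomposition X = X(I − P) + PXP + (I − P)XP, valid for any projection P, combined with the triangle inequality; a minimal sketch with illustrative 2×2 matrices:

```python
# Check of the exact decomposition X = X(I - P) + P X P + (I - P) X P
# for a projection P; the matrices X and P are illustrative choices.

def mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]
P = [[1, 0], [0, 0]]   # orthogonal projection onto the first coordinate
IP = [[I2[i][j] - P[i][j] for j in range(2)] for i in range(2)]  # I - P

X = [[3, -1], [4, 2]]  # plays the role of S_{K,t} - S_{K,0}
recomposed = add(mul(X, IP), mul(mul(P, X), P), mul(mul(IP, X), P))
print(recomposed == X)  # True
```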

The counterpart of Lemma 5.4.11 for the case where the Friedrichs extension S<sub>F</sub> has a compact resolvent is provided next.

**Proposition 5.4.12.** Let S be a semibounded relation in H with lower bound γ and assume that the resolvent (S<sub>F</sub> − λ)<sup>−1</sup> of the Friedrichs extension S<sub>F</sub> is compact for some, and hence for all, λ ∈ ℂ \ [γ, ∞). Then

$$\lim\_{t \downarrow -\infty} \|(S\_{\mathcal{K},t} - \lambda)^{-1} - (S\_{\mathcal{F}} - \lambda)^{-1}\| = 0.$$

Proof. It follows from the resolvent identity (see Theorem 1.2.6) that the resolvent of S<sub>F</sub> is compact for all λ ∈ ℂ \ [γ, ∞) if it is compact for some λ ∈ ℂ \ [γ, ∞). Now let x<sub>0</sub> < γ and note that (S − x<sub>0</sub>)<sup>−1</sup> is a bounded nonnegative operator. By (5.4.17) and (5.3.4) one has

$$\left(\left(S - x\_0\right)^{-1}\right)\_{\mathcal{K},0} = \left(\left(S - x\_0\right)\_{\mathcal{F}}\right)^{-1} = \left(S\_{\mathcal{F}} - x\_0\right)^{-1},$$

which is a compact operator by assumption. From Lemma 5.4.11 it follows that ((S − x<sub>0</sub>)<sup>−1</sup>)<sub>K,x</sub> converges uniformly to ((S − x<sub>0</sub>)<sup>−1</sup>)<sub>K,0</sub> when x ↑ 0. This implies the assertion for λ = x<sub>0</sub>, since

$$\begin{aligned} \lim\_{x \uparrow 0} \left( (S - x\_0)^{-1} \right)\_{\mathbf{K}, x} &= \lim\_{t \downarrow -\infty} \left( (S - x\_0)^{-1} \right)\_{\mathbf{K}, 1/t} \\ &= \lim\_{t \downarrow -\infty} \left( (S - x\_0)\_{\mathbf{K}, t} \right)^{-1} \\ &= \lim\_{t \downarrow -\infty} \left( S\_{\mathbf{K}, t + x\_0} - x\_0 \right)^{-1} \\ &= \lim\_{t \downarrow -\infty} \left( S\_{\mathbf{K}, t} - x\_0 \right)^{-1}, \end{aligned}$$

where (5.4.7) was used in the second equality and (5.4.9) was used in the third equality. The general case λ ∈ ℂ \ [γ, ∞) follows with Lemma 1.11.4. □

Let S be a closed semibounded relation in H with lower bound γ and let x < γ. Then x ∈ ρ(S<sub>F</sub>) is a point of regular type of S and one has that

$$S^\* = S\_\mathcal{F} \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*) \quad \text{and} \quad S\_{\mathcal{K},x} = S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_x(S^\*);$$

cf. Theorem 1.7.1 and (5.4.3). Therefore, it is clear that

$$S\_{\mathcal{K},x} \mathbin{\widehat{+}} S\_{\mathcal{F}} = S^\*, \quad x < \gamma; \tag{5.4.26}$$

in other words, the extensions S<sub>K,x</sub> and S<sub>F</sub> are transversal for x < γ. For x = γ the situation is different. Now it is possible that the extensions S<sub>K,γ</sub> and S<sub>F</sub> are transversal, but it is also possible that they are not transversal because, for instance, the extensions S<sub>K,γ</sub> and S<sub>F</sub> may even coincide. First the case of transversality is discussed.

**Corollary 5.4.13.** Let S be a semibounded relation in H with lower bound γ. Then the following statements hold:


Proof. (i) This statement follows from Theorem 5.3.8.

(ii) Assume that S<sub>F</sub> and S<sub>K,γ</sub> are transversal and that S is a bounded operator. Then part (i) shows that

$$\operatorname{dom} S^\* \subset \operatorname{dom} \left( S\_{\mathcal{K}, \gamma} - \gamma \right)^{\frac{1}{2}}.\tag{5.4.27}$$

Since S<sup>∗∗</sup> is a bounded closed operator, dom S<sup>∗∗</sup> is closed, and hence so is dom S<sup>∗</sup>; see Theorem 1.3.5. Moreover, (dom S<sup>∗</sup>)<sup>⊥</sup> = mul S<sup>∗∗</sup> = {0} implies that dom S<sup>∗</sup> is dense in H, so that dom S<sup>∗</sup> = H. Then it follows from (5.4.27) that

$$\text{dom}\,(S\_{\mathbb{K},\gamma} - \gamma)^{\frac{1}{2}} = \mathfrak{H}.$$

Therefore, dom S<sub>K,γ</sub> = H and hence S<sub>K,γ</sub> ∈ **B**(H).

Conversely, assume that S<sub>K,γ</sub> ∈ **B**(H). Then also S is bounded. Moreover, dom S<sup>∗</sup> ⊂ H = dom (S<sub>K,γ</sub> − γ)<sup>1/2</sup>, which together with (i) shows that S<sub>F</sub> and S<sub>K,γ</sub> are transversal. □

The extreme case of equality of S<sub>K,γ</sub> and S<sub>F</sub> is described in the following corollary.

**Corollary 5.4.14.** Let S be a semibounded relation in H with lower bound γ. Then the following statements hold:


Proof. (i) This statement follows from Corollary 5.3.10.

(ii) Assume that S<sub>F</sub> = S<sub>K,γ</sub> and S<sub>K,γ</sub> ∈ **B**(H). The assumption S<sub>K,γ</sub> ∈ **B**(H) and Corollary 5.4.13 (ii) imply that S<sub>F</sub> and S<sub>K,γ</sub> are transversal and S is bounded. Furthermore, S<sub>F</sub> = S<sub>K,γ</sub> implies that S<sup>∗</sup> = S<sub>F</sub> +̂ S<sub>K,γ</sub> = S<sub>K,γ</sub>, so that S<sup>∗</sup> = S<sub>K,γ</sub> is a self-adjoint operator in **B**(H).

Conversely, if S<sup>∗∗</sup> ∈ **B**(H), then S<sup>∗∗</sup> is the only self-adjoint extension of S and hence S<sub>F</sub> = S<sub>K,γ</sub> = S<sup>∗∗</sup> ∈ **B**(H). □

In the next corollary, which is a special version of Corollary 5.3.11, the form t<sub>S<sub>K,x</sub></sub> for x < γ corresponding to the Kreĭn type extensions S<sub>K,x</sub> is expressed in terms of the Friedrichs form t<sub>S<sub>F</sub></sub>.

**Corollary 5.4.15.** Let S be a semibounded relation in H with lower bound γ, let x < γ, and let t<sub>S<sub>K,x</sub></sub> be the form corresponding to S<sub>K,x</sub>. Then

$$\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{K},x}} = \ker \left( S^\* - a \right) \oplus\_{\mathfrak{t}\_{S\_{\mathcal{K},x}} - a} \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}, \qquad a < x,\tag{5.4.28}$$

and the restriction of $\mathfrak{t}_{S_{K,x}}$ to $\mathfrak{N}_a(S^{*}) = \ker\,(S^{*} - a)$ is represented by the bounded self-adjoint operator

$$L\_a = P\_{\mathfrak{N}\_a(S^\*)} \left( x + (x - a)^2 (S\_F - x)^{-1} \right) \iota\_{\mathfrak{N}\_a(S^\*)} \in \mathbf{B}(\mathfrak{N}\_a(S^\*)),\tag{5.4.29}$$

where $\iota_{\mathfrak{N}_a(S^{*})}$ is the canonical embedding of $\mathfrak{N}_a(S^{*})$ into $\mathfrak{H}$ and $P_{\mathfrak{N}_a(S^{*})}$ is the orthogonal projection onto $\mathfrak{N}_a(S^{*})$. Furthermore,

$$\begin{split} \mathfrak{t}_{S_{K,x}}[f,g] - a(f,g) &= (x-a)\bigl(\bigl(I + (x-a)(S_F - x)^{-1}\bigr)f_a, g_a\bigr) \\ &\qquad + \mathfrak{t}_{S_F}[f_F, g_F] - a(f_F, g_F) \end{split} \tag{5.4.30}$$

holds for all $f = f_a + f_F$, $g = g_a + g_F \in \operatorname{dom} \mathfrak{t}_{S_{K,x}}$, where $f_a, g_a \in \ker\,(S^{*} - a)$ and $f_F, g_F \in \operatorname{dom} \mathfrak{t}_{S_F}$.

Proof. The decomposition (5.4.28) is clear from Corollary 5.3.11, since $S_{K,x}$ and $S_F$ are transversal for $x < \gamma$. Next it will be shown that the representing operator for the restriction of $\mathfrak{t}_{S_{K,x}}$ to $\mathfrak{N}_a(S^{*})$ is given by (5.4.29); then (5.3.24) in Corollary 5.3.11 also leads to (5.4.30).

In order to verify (5.4.29), consider $f_a, g_a \in \mathfrak{N}_a(S^{*})$ and let

$$f\_x = \left(I + (x - a)(S\_\mathcal{F} - x)^{-1}\right)f\_a.$$

Then $f_x \in \mathfrak{N}_x(S^{*})$ and $f_a = (I + (a - x)(S_F - a)^{-1})f_x$ by Lemma 1.4.10. Moreover, since $S_{K,x}$ represents the form $\mathfrak{t}_{S_{K,x}}$ and $f_x \in \operatorname{dom} S_{K,x}$, one has $\mathfrak{t}_{S_{K,x}}[f_x, g_a] = (x f_x, g_a)$. Using $(S_F - a)^{-1} f_x \in \operatorname{dom} \mathfrak{t}_{S_F}$ and $g_a \in \mathfrak{N}_a(S^{*})$, the orthogonal decomposition (5.4.28) yields $\bigl((S_F - a)^{-1} f_x, g_a\bigr)_{\mathfrak{t}_{S_{K,x}} - a} = 0$.

Now one computes

$$\begin{aligned} \mathfrak{t}_{S_{K,x}}[f_a, g_a] &= \mathfrak{t}_{S_{K,x}}[f_x, g_a] + (a-x)\,\mathfrak{t}_{S_{K,x}}\bigl[(S_F - a)^{-1}f_x, g_a\bigr] \\ &= (xf_x, g_a) + (a-x)a\bigl((S_F - a)^{-1}f_x, g_a\bigr) \\ &= \bigl(xf_x + a(f_a - f_x), g_a\bigr) \\ &= \bigl((x-a)\bigl(I + (x-a)(S_F - x)^{-1}\bigr)f_a + af_a, g_a\bigr) \\ &= \bigl(\bigl(x + (x-a)^2(S_F - x)^{-1}\bigr)f_a, g_a\bigr), \end{aligned}$$

which implies (5.4.29). □
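For completeness, the fact that the maps $f_a \mapsto f_x$ and $f_x \mapsto f_a$ used in the proof are mutually inverse can be checked directly with the resolvent identity $(S_F - x)^{-1} - (S_F - a)^{-1} = (x - a)(S_F - a)^{-1}(S_F - x)^{-1}$ for $a, x \in \rho(S_F)$:

```latex
% Mutual inverse property of the maps in the proof of Corollary 5.4.15.
\begin{aligned}
&\bigl(I + (a - x)(S_F - a)^{-1}\bigr)\bigl(I + (x - a)(S_F - x)^{-1}\bigr) \\
&\qquad = I + (x - a)\bigl[(S_F - x)^{-1} - (S_F - a)^{-1}\bigr]
      - (x - a)^2 (S_F - a)^{-1}(S_F - x)^{-1} \\
&\qquad = I + (x - a)^2 (S_F - a)^{-1}(S_F - x)^{-1}
      - (x - a)^2 (S_F - a)^{-1}(S_F - x)^{-1} = I.
\end{aligned}
```

The same computation with $a$ and $x$ interchanged gives the product in the opposite order, which explains the formula $f_a = (I + (a - x)(S_F - a)^{-1})f_x$ from Lemma 1.4.10.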

Finally, the decomposition (5.4.28) in the previous corollary is used to show a similar direct sum decomposition for a = x.

**Corollary 5.4.16.** Let $S$ be a semibounded relation in $\mathfrak{H}$ with lower bound $\gamma$, let $x < \gamma$, and let $\mathfrak{t}_{S_{K,x}}$ be the form corresponding to $S_{K,x}$. Then

$$\operatorname{dom} \mathfrak{t}_{S_{K,x}} = \ker\,(S^{*}-x) + \operatorname{dom} \mathfrak{t}_{S_F}\tag{5.4.31}$$

is a direct sum decomposition.

Proof. Let $a < x < \gamma$. Recall that the decomposition (5.4.28) holds, since $S_{K,x}$ and $S_F$ are transversal.

It is clear that the right-hand side of (5.4.31) is contained in the left-hand side. Observe for this that

$$
\operatorname{dom} \mathfrak{t}_{S_F} \subset \operatorname{dom} \mathfrak{t}_{S_{K,x}} \quad \text{and} \quad \ker\,(S^{*} - x) \subset \operatorname{dom} S_{K,x} \subset \operatorname{dom} \mathfrak{t}_{S_{K,x}}.
$$

To show that the left-hand side of (5.4.31) is contained in the right-hand side, let $f \in \operatorname{dom} \mathfrak{t}_{S_{K,x}}$. According to (5.4.28), one has $f = f_a + f_F$ with $f_a \in \ker\,(S^{*} - a)$ and $f_F \in \operatorname{dom} \mathfrak{t}_{S_F}$. Define

$$f\_x = (I + (x - a)(S\_\mathcal{F} - x)^{-1})f\_a.$$

Then

$$f\_x \in \ker\left(S^\*-x\right) \quad \text{and} \quad f = f\_x + (f\_a - f\_x + f\_\mathcal{F}),$$

where the last term is in $\operatorname{dom} \mathfrak{t}_{S_F}$, since $f_a - f_x \in \operatorname{dom} S_F$. Hence, the left-hand side of (5.4.31) is contained in the right-hand side, and the sum decomposition (5.4.31) has been shown.
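For completeness, the membership $f_a - f_x \in \operatorname{dom} S_F$ used above is immediate from the definition of $f_x$:

```latex
% From f_x = (I + (x - a)(S_F - x)^{-1}) f_a one obtains
f_a - f_x = -(x - a)(S_F - x)^{-1} f_a
          \in \operatorname{ran}\,(S_F - x)^{-1}
          = \operatorname{dom} S_F \subset \operatorname{dom} \mathfrak{t}_{S_F}.
```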

Finally, it will be shown that the sum decomposition (5.4.31) is direct. For this, assume that $f_x \in \ker\,(S^{*} - x)$ is nontrivial and belongs to $\operatorname{dom} \mathfrak{t}_{S_F}$. Then $\{f_x, x f_x\} \in S^{*}$ implies that $\{f_x, x f_x\} \in S_F$; cf. Theorem 5.3.3. Since $x < \gamma$, this is a contradiction. □

## **5.5 Boundary triplets for semibounded relations**

In this section semibounded self-adjoint extensions of semibounded symmetric relations are studied in the context of boundary triplets and their Weyl functions. The initial observations are general results about a closed symmetric relation $S$ with a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$, where $A_0 = \ker \Gamma_0$ is semibounded. In particular, the Friedrichs and the Kreĭn type extensions will be identified. In the remaining part of the section it will be assumed that $S$ is a semibounded relation with a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$, where $A_0 = S_F$, and various specific properties are derived. The case where the self-adjoint extension $A_1 = \ker \Gamma_1$ is also semibounded is of specific interest. As a preparation for the main results in the following section it will be explained how the corresponding semibounded form is the first stepping stone to the notion of a boundary pair.

Let $S$ be a closed symmetric relation in a Hilbert space $\mathfrak{H}$ and let $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ be a boundary triplet for $S^{*}$. Assume that $A_0 = \ker \Gamma_0$ is semibounded with lower bound $\gamma_0 = m(A_0)$. Then clearly $S$ is semibounded and $\gamma_0 \leq m(S) = \gamma$. Therefore, one may speak of the Friedrichs extension $S_F$ of $S$, so that $\gamma_0 \leq m(S_F) = \gamma$, and of the Kreĭn type extensions $S_{K,x}$ of $S$ with $x \leq \gamma$. The corresponding Weyl function $M$ is holomorphic on $\rho(A_0)$ and, in particular, on $\mathbb{C} \setminus [\gamma_0, \infty)$. Moreover, one has

$$M(x) = \Gamma\bigl(\widehat{\mathfrak{N}}_x(S^{*})\bigr) = \Gamma\bigl(S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}_x(S^{*})\bigr) = \Gamma(S_{K,x}), \quad x < \gamma_0;\tag{5.5.1}$$

cf. Definition 2.3.4 and (5.4.3). By Corollary 2.3.8, the mapping $x \mapsto M(x)$ from $(-\infty, \gamma_0)$ to $\mathbf{B}(\mathcal{G})$ is nondecreasing. In particular, by Corollary 5.2.14 the limit $M(-\infty)$ exists in the strong resolvent sense,

$$\left(M(-\infty)-\lambda\right)^{-1} = \lim\_{x \downarrow -\infty} \left(M(x)-\lambda\right)^{-1},\tag{5.5.2}$$

and the limit $M(\gamma_0)$ exists in the strong resolvent sense,

$$\left(M(\gamma\_0) - \lambda\right)^{-1} = \lim\_{x \uparrow \gamma\_0} \left(M(x) - \lambda\right)^{-1},\tag{5.5.3}$$

where $\lambda \in \mathbb{C} \setminus [\gamma_0, \infty)$. Then $M(-\infty)$ and $M(\gamma_0)$ are self-adjoint relations in $\mathcal{G}$; cf. Theorem 5.2.11 and Corollary 5.2.14. In the following theorem the Friedrichs extension $S_F$ and the Kreĭn type extensions $S_{K,x}$ with $x \leq \gamma_0$ will be characterized by means of the limits in (5.5.2) and (5.5.3).

**Theorem 5.5.1.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ with lower bound $\gamma$. Let $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ be a boundary triplet for $S^{*}$ and let $M$ be the corresponding Weyl function. Assume that the self-adjoint extension $A_0 = \ker \Gamma_0$ is semibounded with $\gamma_0 = m(A_0) \leq m(S_F) = \gamma$. Then the Friedrichs extension $S_F$ of $S$ is given by

$$S\_{\mathcal{F}} = \left\{ \widehat{f} \in S^\* : \{ \Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{f} \} \in M(-\infty) \right\} \tag{5.5.4}$$

and the Kreĭn type extension $S_{K,x}$ of $S$ with $x \leq \gamma_0$ is given by

$$S_{K,x} = \bigl\{ \widehat{f} \in S^{*} : \{ \Gamma_0 \widehat{f}, \Gamma_1 \widehat{f} \} \in M(x) \bigr\}. \tag{5.5.5}$$

Proof. According to Theorem 5.4.10, the Friedrichs extension $S_F$ of $S$ is given by the strong resolvent limit

$$(S\_{\mathcal{F}} - \lambda)^{-1}h = \lim\_{t \downarrow -\infty} (S\_{\mathcal{K},t} - \lambda)^{-1}h, \quad h \in \mathfrak{H},\tag{5.5.6}$$

and for each $x \leq \gamma_0$ the Kreĭn type extension $S_{K,x}$ is given by the strong resolvent limit

$$(S\_{\mathcal{K},x} - \lambda)^{-1}h = \lim\_{t \uparrow x} (S\_{\mathcal{K},t} - \lambda)^{-1}h, \quad h \in \mathfrak{H},\tag{5.5.7}$$

where $\lambda \in \mathbb{C} \setminus [x, \infty)$. The idea behind the proof of the theorem is to connect the limit formulas in (5.5.2) and (5.5.3) with the limit formulas in (5.5.6) and (5.5.7). This will be done by means of the Kreĭn formula. For this purpose, observe that for $t < \gamma_0$ and $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the resolvent formula in Theorem 2.6.1 for $S_{K,t}$ reads

$$(S_{K,t} - \lambda)^{-1} = (A_0 - \lambda)^{-1} + \gamma(\lambda)\bigl(M(t) - M(\lambda)\bigr)^{-1}\gamma(\overline{\lambda})^{*},\tag{5.5.8}$$

due to (5.5.1). Here $(M(t) - M(\lambda))^{-1} \in \mathbf{B}(\mathcal{G})$ by Theorem 2.6.1 and Theorem 2.6.2.

First consider the Kreĭn type extension $S_{K,x}$ of $S$. If $x < \gamma_0$, then the formula (5.5.5) is a direct consequence of (5.5.1). To treat the case $x = \gamma_0$, let $\Theta_K$ be the self-adjoint relation in $\mathcal{G}$ which corresponds to $S_{K,\gamma_0}$, that is,

$$\mathcal{S}\_{\mathcal{K},\gamma\_0} = \left\{ \widehat{f} \in S^\* \, : \, \{ \Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{f} \} \in \Theta\_{\mathcal{K}} \right\}.\tag{5.5.9}$$

Then again by the resolvent formula in Theorem 2.6.1 one has

$$(S\_{\mathcal{K},\gamma\_0} - \lambda)^{-1} = (A\_0 - \lambda)^{-1} + \gamma(\lambda) \left(\Theta\_{\mathcal{K}} - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\* \tag{5.5.10}$$

for $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Here $(\Theta_K - M(\lambda))^{-1} \in \mathbf{B}(\mathcal{G})$ by Theorem 2.6.1 and Theorem 2.6.2. Subtracting (5.5.10) from (5.5.8) leads to

$$\begin{split} &(S_{K,t} - \lambda)^{-1} - (S_{K,\gamma_0} - \lambda)^{-1} \\ &\qquad = \gamma(\lambda)\Bigl[\bigl(M(t) - M(\lambda)\bigr)^{-1} - \bigl(\Theta_K - M(\lambda)\bigr)^{-1}\Bigr]\gamma(\overline{\lambda})^{*} \end{split} \tag{5.5.11}$$

with $t < \gamma_0$. Now take the strong limit as $t \uparrow \gamma_0$ and apply (5.5.7). Then for each $h \in \mathfrak{H}$

$$(S\_{\mathcal{K},t} - \lambda)^{-1}h \to (S\_{\mathcal{K},\gamma\_0} - \lambda)^{-1}h \quad \text{as} \quad t \uparrow \gamma\_0,$$

which, via (5.5.11), leads to

$$\gamma(\lambda)\left[\left(M(t) - M(\lambda)\right)^{-1} - \left(\Theta\_{\mathcal{K}} - M(\lambda)\right)^{-1}\right]\gamma(\overline{\lambda})^\*h \to 0 \quad \text{as} \quad t \uparrow \gamma\_0.$$

Since $\gamma(\lambda)$ maps $\mathcal{G}$ isomorphically onto $\ker\,(S^{*} - \lambda)$ and $\gamma(\overline{\lambda})^{*} : \mathfrak{H} \to \mathcal{G}$ is surjective, see Proposition 2.3.2, it follows that for each $\varphi \in \mathcal{G}$

$$\left(M(t) - M(\lambda)\right)^{-1} \varphi \to \left(\Theta\_{\mathcal{K}} - M(\lambda)\right)^{-1} \varphi \quad \text{as} \quad t \uparrow \gamma\_0. \tag{5.5.12}$$

Next the parameter $\Theta_K$ will be identified with $M(\gamma_0)$. For this purpose, observe that for $\varphi \in \mathcal{G}$

$$\left\{ (M(t) - M(\lambda))^{-1} \varphi, \varphi + M(\lambda)(M(t) - M(\lambda))^{-1} \varphi \right\} \in M(t). \tag{5.5.13}$$

As $t \uparrow \gamma_0$, the components on the left-hand side of (5.5.13) converge due to (5.5.12), while the bounded operators $M(t)$ converge in the strong resolvent sense, and hence in the graph sense (see Corollary 1.9.6), to the self-adjoint relation $M(\gamma_0)$. Hence, (5.5.13) implies

$$\left\{ (\Theta\_{\mathcal{K}} - M(\lambda))^{-1} \varphi, \varphi + M(\lambda)(\Theta\_{\mathcal{K}} - M(\lambda))^{-1} \varphi \right\} \in M(\gamma\_0) \tag{5.5.14}$$

for all $\varphi \in \mathcal{G}$, and thus $(\Theta_K - M(\lambda))^{-1} \subset (M(\gamma_0) - M(\lambda))^{-1}$. Since $M(\lambda) \in \mathbf{B}(\mathcal{G})$, it follows that $\Theta_K \subset M(\gamma_0)$, and, since both relations are self-adjoint, $\Theta_K = M(\gamma_0)$. Now (5.5.5) for $x = \gamma_0$ follows from (5.5.9).
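For completeness, the inclusion (5.5.13) rests on an elementary computation: with $\psi = (M(t) - M(\lambda))^{-1}\varphi$ and $M(t) \in \mathbf{B}(\mathcal{G})$,

```latex
% Elementary identity behind (5.5.13): for \psi = (M(t) - M(\lambda))^{-1}\varphi,
M(t)\psi = \bigl(M(t) - M(\lambda)\bigr)\psi + M(\lambda)\psi
         = \varphi + M(\lambda)\bigl(M(t) - M(\lambda)\bigr)^{-1}\varphi,
```

so that the pair $\{\psi, M(t)\psi\}$ is precisely the element of (the graph of) $M(t)$ displayed in (5.5.13).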

Next consider the Friedrichs extension $S_F$ of $S$. Let $\Theta_F$ be the self-adjoint relation in $\mathcal{G}$ which corresponds to $S_F$, that is,

$$S\_{\mathcal{F}} = \left\{ \widehat{f} \in S^\* \, : \, \{ \Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{f} \} \in \Theta\_{\mathcal{F}} \right\}.$$

Then again by the resolvent formula in Theorem 2.6.1 one has

$$(S\_\mathcal{F} - \lambda)^{-1} = (A\_0 - \lambda)^{-1} + \gamma(\lambda) \left(\Theta\_\mathcal{F} - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\* \tag{5.5.15}$$

for $\lambda \in \mathbb{C} \setminus \mathbb{R}$. As above, $(\Theta_F - M(\lambda))^{-1} \in \mathbf{B}(\mathcal{G})$. Subtracting (5.5.15) from (5.5.8) and using the same reasoning as above, now involving Theorem 5.4.10, yields

$$\left(M(t) - M(\lambda)\right)^{-1} \varphi \to \left(\Theta\_{\mathcal{F}} - M(\lambda)\right)^{-1} \varphi \quad \text{as} \quad t \downarrow -\infty.$$

From the fact that $M(-\infty)$ is the strong resolvent limit, and hence the strong graph limit, of $M(t)$ as $t \downarrow -\infty$, one concludes $\Theta_F = M(-\infty)$ in the same way as in (5.5.13)–(5.5.14). This shows (5.5.4). □

The statements in the following corollary are consequences of Theorem 5.5.1 and Proposition 2.1.8.

**Corollary 5.5.2.** Let $S$ be a closed symmetric relation, let $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ be a boundary triplet for $S^{*}$, and let $M$ be the corresponding Weyl function. Assume that the self-adjoint extension $A_0 = \ker \Gamma_0$ is semibounded with lower bound $\gamma_0$. Then the following statements hold:

(i) $A_0 = S_F$ if and only if $M(-\infty) = \{0\} \times \mathcal{G}$;

(ii) $A_0 \cap S_F = S$ if and only if $M(-\infty)$ is a closed operator;

(iii) $A_0 \mathbin{\widehat{+}} S_F = S^{*}$ if and only if $M(-\infty) \in \mathbf{B}(\mathcal{G})$,

and, similarly

(iv) $A_0 = S_{K,\gamma_0}$ if and only if $M(\gamma_0) = \{0\} \times \mathcal{G}$;

(v) $A_0 \cap S_{K,\gamma_0} = S$ if and only if $M(\gamma_0)$ is a closed operator;

(vi) $A_0 \mathbin{\widehat{+}} S_{K,\gamma_0} = S^{*}$ if and only if $M(\gamma_0) \in \mathbf{B}(\mathcal{G})$.

Moreover, for $A_1 = \ker \Gamma_1$ one has

(vii) $A_1 = S_F$ if and only if $M(-\infty) = \mathcal{G} \times \{0\}$;

(viii) $A_1 = S_{K,\gamma_0}$ if and only if $M(\gamma_0) = \mathcal{G} \times \{0\}$.

Such facts can also be stated in terms of the limits $M(-\infty)$ and $M(\gamma_0)$ via Corollary 5.2.14 applied to the Weyl function $M$. For this purpose, recall the notations

$$\begin{aligned} \mathfrak{E}_{\gamma_0} &= \Bigl\{ \varphi \in \mathcal{G} : \lim_{x \uparrow \gamma_0} (M(x)\varphi, \varphi) < \infty \Bigr\}, \\ \mathfrak{E}_{-\infty} &= \Bigl\{ \varphi \in \mathcal{G} : \lim_{x \downarrow -\infty} (M(x)\varphi, \varphi) > -\infty \Bigr\}. \end{aligned}$$

**Corollary 5.5.3.** Let $S$ be a closed symmetric relation, let $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ be a boundary triplet for $S^{*}$, and let $M$ be the corresponding Weyl function. Assume that the self-adjoint extension $A_0 = \ker \Gamma_0$ is semibounded with lower bound $\gamma_0$. Then the following statements hold:


and, similarly


In the context of Theorem 5.5.1, the Weyl function of the boundary triplet is holomorphic on the interval (−∞, γ0). In this situation the inverse result in Theorem 4.2.4 can be formulated as follows.

**Proposition 5.5.4.** Let $\mathcal{G}$ be a Hilbert space and let $M$ be a uniformly strict $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function which is holomorphic on $\mathbb{C} \setminus [\gamma_0, \infty)$ and not holomorphic at $\gamma_0$. Then there exist a Hilbert space $\mathfrak{H}$, a closed simple symmetric operator $S$ in $\mathfrak{H}$, and a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for $S^{*}$ such that $A_0$ is a semibounded self-adjoint relation with lower bound $m(A_0) = \gamma_0$ and $M$ is the corresponding Weyl function.

Proof. Let $M$ be a uniformly strict Nevanlinna function with values in $\mathbf{B}(\mathcal{G})$. Let $\mathfrak{H}(N_M)$ be the reproducing kernel Hilbert space associated with the Nevanlinna kernel

$$\frac{M(\lambda) - M(\mu)^\*}{\lambda - \overline{\mu}}, \quad \lambda, \mu \in \mathbb{C} \backslash \mathbb{R}.$$

By Theorem 4.2.4, there exist a closed simple symmetric operator $S$ in the reproducing kernel Hilbert space $\mathfrak{H}(N_M)$ and a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for $S^{*}$ such that $M$ is the corresponding Weyl function. The assumption that $M$ is holomorphic on $(-\infty, \gamma_0)$ and the fact that $S$ is simple imply, by Theorem 3.6.1, that $(-\infty, \gamma_0) \subset \rho(A_0)$. Moreover, since $M$ is not holomorphic at $\gamma_0$, one has $\gamma_0 \in \sigma(A_0)$. Therefore, the self-adjoint relation $A_0$ is semibounded with lower bound $m(A_0) = \gamma_0$. □

The context of Theorem 5.5.1 will now be narrowed. Let $S$ be a closed semibounded relation in $\mathfrak{H}$. Then the existence of a semibounded self-adjoint extension of $S$ is guaranteed by the Friedrichs extension $S_F$ of $S$. The interest in the rest of this section is in boundary triplets $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for which $A_0 = S_F$. The following result is a consequence of Theorem 2.4.1, since for any self-adjoint extension $H$ of $S$ there is a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for $S^{*}$ such that $H = \ker \Gamma_0$.

**Corollary 5.5.5.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ with lower bound $\gamma$. Then there exists a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for $S^{*}$ such that

$$S_F = \ker \Gamma_0.$$

The corresponding Weyl function $M$ is holomorphic on $(-\infty, \gamma)$ and the mapping $x \mapsto M(x)$ from $(-\infty, \gamma)$ to $\mathbf{B}(\mathcal{G})$ is nondecreasing, while $M(-\infty) = \{0\} \times \mathcal{G}$.

The following result will be useful in treating the connection between semibounded self-adjoint extensions and the Weyl function.

**Proposition 5.5.6.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$, let $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ be a boundary triplet for $S^{*}$ such that $S_F = \ker \Gamma_0$, and let $M$ be the corresponding Weyl function. Let $A_\Theta$ be a self-adjoint extension of $S$ corresponding to the self-adjoint relation $\Theta$ in $\mathcal{G}$ and assume that $x < m(S)$. Then $M(x) \in \mathbf{B}(\mathcal{G})$ and the following equivalence holds:

$$x \le A\_{\Theta} \quad \Leftrightarrow \quad M(x) \le \Theta.$$

In particular, if $A_\Theta$ is semibounded in $\mathfrak{H}$, then $\Theta$ is semibounded in $\mathcal{G}$.

Proof. The assumption $x < m(S) = m(S_F)$ implies that $x \in \rho(S_F)$, and hence $M(x) \in \mathbf{B}(\mathcal{G})$ is clear. The formula in Theorem 2.6.1, applied to $A_\Theta$ and $S_F$, gives

$$(A\_{\Theta} - x)^{-1} - (S\_{\mathcal{F}} - x)^{-1} = \gamma(x)(\Theta - M(x))^{-1}\gamma(x)^{\*},\tag{5.5.16}$$

since $(S_F - x)^{-1} \in \mathbf{B}(\mathfrak{H})$. Recall that $\gamma(x)^{*}$ maps $\ker\,(S^{*} - x)$ onto $\mathcal{G}$; see Proposition 2.3.2. Note that if, in addition, $x \in \rho(A_\Theta)$, then $(\Theta - M(x))^{-1} \in \mathbf{B}(\mathcal{G})$.

($\Rightarrow$) Since $A_\Theta$ is a semibounded self-adjoint extension of $S$, it follows from Proposition 5.3.6 that $A_\Theta \leq S_F$. Observe that for $x \in \rho(A_\Theta)$

$$0 \le (S\_{\mathcal{F}} - x)^{-1} \le (A\_{\Theta} - x)^{-1},$$

where both operators belong to $\mathbf{B}(\mathfrak{H})$, since $x \in \rho(S_F) \cap \rho(A_\Theta)$. Hence, it then follows from (5.5.16) that

$$(\Theta - M(x))^{-1} \ge 0 \quad \text{or, equivalently,} \quad \Theta - M(x) \ge 0.$$

Since $M(x) \in \mathbf{B}(\mathcal{G})$, it follows that $M(x) \leq \Theta$; cf. Proposition 5.2.6.

Now let $x \leq A_\Theta$. First assume that $x < m(A_\Theta)$. In this case $x \in \rho(A_\Theta)$, and thus $M(x) \leq \Theta$. Next assume that $x = m(A_\Theta)$ and consider an increasing sequence $x_n$ whose limit is $x$. Then clearly $M(x_n) \leq \Theta$, and thus $M(x) \leq \Theta$; cf. Corollary 5.2.12 (i).

($\Leftarrow$) Assume that $M(x) \leq \Theta$. Then $\Theta - M(x) \geq 0$ by Proposition 5.2.6, and hence $(\Theta - M(x))^{-1} \geq 0$. It is straightforward to see that the right-hand side of (5.5.16) is a nonnegative relation. Thus, the relation

$$(A\_{\Theta} - x)^{-1} - (S\_{\mathcal{F}} - x)^{-1}$$

on the left-hand side of (5.5.16) is also nonnegative and, in fact, this relation is also self-adjoint, since $(A_\Theta - x)^{-1}$ is self-adjoint and $(S_F - x)^{-1} \in \mathbf{B}(\mathfrak{H})$ is self-adjoint. Therefore, one concludes with the help of Proposition 5.2.6 and $x < m(S_F)$ that

$$0 \le (S\_\mathcal{F} - x)^{-1} \le (A\_\Theta - x)^{-1}.$$

In particular, this shows that $0 \leq A_\Theta - x$, that is, $x \leq A_\Theta$. □

From Proposition 5.5.6 one sees that if the self-adjoint extension $A_\Theta$ is semibounded in $\mathfrak{H}$, then the corresponding self-adjoint relation $\Theta$ is semibounded in $\mathcal{G}$. The converse is not true in general; cf. Remark 5.6.16. However, in Proposition 5.5.8 below it will be shown that the converse holds if $S$ has finite defect numbers or if $S_F$ has a compact resolvent. The following result is a preliminary observation.

**Lemma 5.5.7.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ with lower bound $\gamma$ and let $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ be a boundary triplet for $S^{*}$ such that $S_F = \ker \Gamma_0$. Let $M$ be the corresponding Weyl function and assume that for any $C > 0$ there exists $x_1 < \gamma$ such that

$$M(x) \le -C, \qquad x \le x\_1. \tag{5.5.17}$$

Then for every semibounded self-adjoint relation $\Theta$ in $\mathcal{G}$ the corresponding self-adjoint extension $A_\Theta$ is semibounded from below.

Proof. Let $\Theta$ be a self-adjoint relation in $\mathcal{G}$ with lower bound $\nu$ and choose $C > 0$ in (5.5.17) such that $-C < \nu \leq \Theta$. For all $x \leq x_1$ one then has

$$0 < \nu + C \le \Theta + C \le \Theta - M(x),$$

which implies that $\Theta - M(x)$ is boundedly invertible for all $x \leq x_1$. From Theorem 2.6.2 one concludes $(-\infty, x_1) \subset \rho(A_\Theta)$, and hence $A_\Theta$ is semibounded from below. This conclusion also follows from Proposition 5.5.6. □

Now it will be shown that the condition (5.5.17) holds when $S$ has finite defect numbers or $S_F$ has a compact resolvent.

**Proposition 5.5.8.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ with lower bound $\gamma$ and let $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ be a boundary triplet for $S^{*}$ such that $S_F = \ker \Gamma_0$. Assume that one of the following conditions holds:

(i) the defect numbers of $S$ are finite;

(ii) the resolvent of $S_F$ is compact.

Then for any $C > 0$ there exists $x_1 < \gamma$ such that (5.5.17) holds. In particular, if (i) holds, then all self-adjoint extensions of $S$ in $\mathfrak{H}$ are semibounded from below, and if (ii) holds and $\Theta$ is a semibounded self-adjoint relation in $\mathcal{G}$, then the self-adjoint extension $A_\Theta$ of $S$ is semibounded from below.

Proof. (i) As $S_F = \ker \Gamma_0$, one has $(M(x)\varphi, \varphi) \to -\infty$ as $x \downarrow -\infty$ for all $\varphi \in \mathcal{G}$, $\varphi \neq 0$, by Corollary 5.5.3 (i). Since $\mathcal{G}$ is finite-dimensional, a compactness argument shows that there exists $x_1 < \gamma$ such that (5.5.17) holds. Every self-adjoint relation $\Theta$ in the finite-dimensional space $\mathcal{G}$ is semibounded, and hence it follows from Lemma 5.5.7 that all self-adjoint extensions $A_\Theta$ are semibounded.

(ii) Recall from Proposition 5.4.12 that the resolvents of the Kreĭn type extensions $S_{K,t}$ converge uniformly to the resolvent of $S_F$, that is, for all $\lambda \in \mathbb{C} \setminus [\gamma, \infty)$ one has

$$\lim\_{t \downarrow -\infty} \|(S\_{\mathcal{K},t} - \lambda)^{-1} - (S\_{\mathcal{F}} - \lambda)^{-1}\| = 0. \tag{5.5.18}$$

In the following fix some $\lambda = x_0 < m(S_F)$ and note that, by (5.5.18), there exists $t' < x_0$ such that $x_0 \in \rho(S_{K,t})$ for all $t \leq t'$. Using (5.5.1), it follows that the resolvent of $S_{K,t}$ has the form

$$(S\_{\mathcal{K},t} - x\_0)^{-1} - (S\_{\mathcal{F}} - x\_0)^{-1} = \gamma(x\_0) \left( M(t) - M(x\_0) \right)^{-1} \gamma(x\_0)^\*,$$

where $(M(t) - M(x_0))^{-1} \in \mathbf{B}(\mathcal{G})$ for all $t \leq t'$ by Theorem 2.6.1 and Theorem 2.6.2. Since $\gamma(x_0)$ maps $\mathcal{G}$ isomorphically onto $\mathfrak{N}_{x_0}(S^{*})$ and $\gamma(x_0)^{*}$ maps $\mathfrak{N}_{x_0}(S^{*})$ isomorphically onto $\mathcal{G}$, it follows together with (5.5.18) that

$$\lim\_{t \downarrow -\infty} \left\| \left( M(t) - M(x\_0) \right)^{-1} \right\| = 0. \tag{5.5.19}$$

This implies that for any $C > 0$ there exists $x_1 < \gamma$ such that (5.5.17) holds. In fact, otherwise there exist some $C_0 > 0$ and a sequence $s_n \to -\infty$ such that $s_n < t' < x_0$ and $M(s_n) > -C_0$. Then the estimate

$$-C\_0 - \|M(x\_0)\| \le -C\_0 - M(x\_0) \le M(s\_n) - M(x\_0) \le 0$$

and $(M(s_n) - M(x_0))^{-1} \in \mathbf{B}(\mathcal{G})$ contradict (5.5.19). Therefore, the condition (5.5.17) is satisfied, and if $\Theta$ is a semibounded self-adjoint relation in $\mathcal{G}$, then by Lemma 5.5.7 the corresponding self-adjoint extension $A_\Theta$ is semibounded. □
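For completeness, the contradiction with (5.5.19) at the end of the proof can be quantified: if a self-adjoint operator $B \in \mathbf{B}(\mathcal{G})$ is boundedly invertible and satisfies $-cI \leq B \leq 0$ with $c > 0$, then $\sigma(B) \subset [-c, 0]$, and hence

```latex
% Spectral bound for a boundedly invertible self-adjoint B with -cI <= B <= 0:
\|B^{-1}\| = \frac{1}{\operatorname{dist}\,(0, \sigma(B))} \geq \frac{1}{c}.
```

Applied with $B = M(s_n) - M(x_0)$ and $c = C_0 + \|M(x_0)\|$, this gives $\|(M(s_n) - M(x_0))^{-1}\| \geq (C_0 + \|M(x_0)\|)^{-1}$ for all $n$, which is incompatible with (5.5.19).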

In the case of Corollary 5.5.5 the relationship between the Friedrichs extension $S_F$ and the Kreĭn type extension $S_{K,\gamma}$ is described in the following corollary, which is a translation of (iv)–(vi) in Corollary 5.5.2; cf. Corollary 5.4.13 and Corollary 5.4.14.

**Corollary 5.5.9.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$. Let $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ be a boundary triplet for $S^{*}$ such that $S_F = \ker \Gamma_0$ and let $M$ be the corresponding Weyl function. Let $\gamma = m(S_F)$. Then the following statements hold:

(i) $S_F = S_{K,\gamma}$ if and only if $M(\gamma) = \{0\} \times \mathcal{G}$;

(ii) $S_F \cap S_{K,\gamma} = S$ if and only if $M(\gamma)$ is a closed operator;

(iii) $S_F \mathbin{\widehat{+}} S_{K,\gamma} = S^{*}$ if and only if $M(\gamma) \in \mathbf{B}(\mathcal{G})$.

In general it may not be possible to simultaneously prescribe $\ker \Gamma_0$ as the Friedrichs extension $S_F$ and $\ker \Gamma_1$ as the Kreĭn type extension $S_{K,\gamma}$, since $\ker \Gamma_0$ and $\ker \Gamma_1$ are necessarily transversal; cf. Section 2.1. However, note that the Friedrichs extension $S_F$ and the Kreĭn type extension $S_{K,x}$ for $x < \gamma$ are automatically transversal; cf. (5.4.26).

**Proposition 5.5.10.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ with lower bound $\gamma$. Then the following statements hold:

(i) For $x < \gamma$ there exists a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for $S^{*}$ such that

$$S\_{\mathcal{F}} = \ker \Gamma\_0 \quad \text{and} \quad S\_{\mathcal{K},x} = \ker \Gamma\_1. \tag{5.5.20}$$

(ii) If $S_F$ and $S_{K,\gamma}$ are transversal, then there exists a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for $S^{*}$ such that

$$S_F = \ker \Gamma_0 \quad \text{and} \quad S_{K,\gamma} = \ker \Gamma_1.$$

In both cases the corresponding Weyl function satisfies $M(-\infty) = \{0\} \times \mathcal{G}$ and $M(x) = \mathcal{G} \times \{0\}$, $x \leq \gamma$. In particular, $M(t) \leq 0$ for all $t \leq x$, i.e., $M$ belongs to the class $\mathbf{S}^{-1}_{\mathcal{G}}(-\infty, x)$ of inverse Stieltjes functions.

Proof. (i) The extensions $S_F$ and $S_{K,x}$ for $x < \gamma$ are automatically transversal according to (5.4.26). Hence, it follows from Theorem 2.5.9 that there exists a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for $S^{*}$ such that (5.5.20) holds.

(ii) Since it is assumed that $S_F$ and $S_{K,\gamma}$ are transversal, Theorem 2.5.9 yields the statement.

The Weyl function $M$ satisfies $M(-\infty) = \{0\} \times \mathcal{G}$ and $M(x) = \mathcal{G} \times \{0\}$ as a consequence of Corollary 5.5.2. Finally, that $M(t) \leq 0$ for all $t \leq x$ is a consequence of the monotonicity of the Weyl function $M$ in Corollary 2.3.8. The assertion $M \in \mathbf{S}^{-1}_{\mathcal{G}}(-\infty, x)$ is immediate from the definition of the inverse Stieltjes class in Definition A.6.1. □

The following corollary is a consequence of Proposition 5.5.4 and Corollary 5.5.2.

**Corollary 5.5.11.** Let $\mathcal{G}$ be a Hilbert space and let $M$ be a uniformly strict $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function which is holomorphic on $\mathbb{C} \setminus [\gamma, \infty)$ and not holomorphic at $\gamma$. Assume, in addition, that

$$M(-\infty) = \{0\} \times \mathcal{G} \quad \text{and} \quad M(\gamma) = \mathcal{G} \times \{0\}. \tag{5.5.21}$$

Then there exist a Hilbert space $\mathfrak{H}$, a closed simple semibounded operator $S$ in $\mathfrak{H}$ with lower bound $\gamma$, and a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for $S^{*}$ with $S_F = \ker \Gamma_0$ and $S_{K,\gamma} = \ker \Gamma_1$, such that $M$ is the corresponding Weyl function.

Proof. It follows from Proposition 5.5.4 that there exist a Hilbert space $\mathfrak{H}$, a closed simple symmetric operator $S$ in $\mathfrak{H}$, and a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for $S^{*}$ such that $A_0 = \ker \Gamma_0$ is a semibounded self-adjoint relation with lower bound $m(A_0) = \gamma$ and $M$ is the corresponding Weyl function. The assumptions in (5.5.21) and Corollary 5.5.2 (i) and (viii) imply $S_F = \ker \Gamma_0 = A_0$ and $S_{K,\gamma} = \ker \Gamma_1$. Since $m(S_F) = m(A_0) = \gamma$, it is also clear that the symmetric operator $S$ is semibounded with lower bound $\gamma$. □
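The conditions (5.5.21) can be illustrated by a simple scalar example; the following computation is a sketch with $\mathcal{G} = \mathbb{C}$, $\gamma = 0$, and the function $M(\lambda) = -\sqrt{-\lambda}$ (principal branch), which arises, for instance, as the Weyl function of the Dirichlet boundary triplet $\Gamma_0 f = f(0)$, $\Gamma_1 f = f'(0)$ for $-d^2/dt^2$ on the half-line:

```latex
% Scalar illustration of (5.5.21): M(\lambda) = -\sqrt{-\lambda} is a uniformly
% strict Nevanlinna function, holomorphic on \mathbb{C} \setminus [0, \infty)
% and not holomorphic at \gamma = 0; on the real line M(x) = -\sqrt{-x}, x < 0.
% For fixed \lambda \in \mathbb{C} \setminus \mathbb{R}:
\lim_{x \downarrow -\infty} \bigl(M(x) - \lambda\bigr)^{-1} = 0,
\qquad
\lim_{x \uparrow 0} \bigl(M(x) - \lambda\bigr)^{-1} = -\frac{1}{\lambda}.
% The zero limit is the resolvent of the purely multivalued relation
% \{0\} \times \mathbb{C}; the limit -1/\lambda is the resolvent of the zero
% operator, whose graph is \mathbb{C} \times \{0\}.
```

Hence $M(-\infty) = \{0\} \times \mathbb{C}$ and $M(0) = \mathbb{C} \times \{0\}$ in the sense of (5.5.2) and (5.5.3), so that the hypotheses of Corollary 5.5.11 are satisfied.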

In the next corollary a boundary triplet with the properties as in Proposition 5.5.10 (i) is exhibited.

**Corollary 5.5.12.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ with lower bound $\gamma$. Then

$$S^{*} = S_F \mathbin{\widehat{+}} \widehat{\mathfrak{N}}_x(S^{*}), \quad x < \gamma,\tag{5.5.22}$$

is a direct sum decomposition. Let $\widehat{f} = \{f, f'\} \in S^{*}$ have the unique decomposition

$$
\widehat{f} = \widehat{f}\_{\mathbb{F}} + \widehat{f}\_x,
$$

with $\widehat{f}_F = \{f_F, f'_F\} \in S_F$ and $\widehat{f}_x = \{f_x, x f_x\} \in \widehat{\mathfrak{N}}_x(S^{*})$. Then

$$
\Gamma\_0 \widehat{f} = f\_x \quad \text{and} \quad \Gamma\_1 \widehat{f} = P\_{\mathfrak{N}\_x(S^\*)} (f'\_\mathcal{F} - xf\_\mathcal{F}).
$$

defines a boundary triplet $\{\mathfrak{N}_x(S^{*}), \Gamma_0, \Gamma_1\}$ for $S^{*}$ such that (5.5.20) holds. For $\lambda \in \rho(S_F)$ the corresponding $\gamma$-field $\gamma$ is given by

$$\gamma(\lambda) = \left(I + (\lambda - x)(S\_{\mathcal{F}} - \lambda)^{-1}\right)\iota\_{\mathfrak{N}\_x(S^\*)},\tag{5.5.23}$$

and the corresponding Weyl function M is given by

$$M(\lambda) = \lambda - x + (\lambda - x)^2 P\_{\mathfrak{N}\_x(S^\*)} (S\_F - \lambda)^{-1} \iota\_{\mathfrak{N}\_x(S^\*)}.\tag{5.5.24}$$

Proof. It is clear from Theorem 1.7.1 that (5.5.22) is a direct sum decomposition. Now choose $\mu = x$ in Theorem 2.4.1 and modify the boundary triplet $\{\mathfrak{N}_x(S^{*}), \Gamma_0, \Gamma_1\}$ in Theorem 2.4.1 to $\{\mathfrak{N}_x(S^{*}), \Gamma_0, \Gamma_1 - x\Gamma_0\}$. Then $S_F = \ker \Gamma_0$ and the corresponding $\gamma$-field and Weyl function have the form (5.5.23) and (5.5.24); cf. Theorem 2.4.1 and Corollary 2.5.5. It is easy to see that

$$S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}_x(S^{*}) \subset \ker \Gamma_1,$$

and since $S_{K,x} = S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}_x(S^{*})$ and $\ker \Gamma_1$ are both self-adjoint, one concludes that $S_{K,x} = \ker \Gamma_1$, so that (5.5.20) holds. □

The following example is an illustration of Proposition 5.5.10 (i) and Corollary 5.5.12 for the case where the semibounded relation $S$ is uniformly positive. In this situation it is convenient to have a boundary triplet for which the Kreĭn–von Neumann extension $S_{K,0}$ corresponds to the boundary mapping $\Gamma_1$; cf. Chapter 8.

**Example 5.5.13.** Let $S$ be a closed nonnegative symmetric relation in $\mathfrak{H}$ with lower bound $\gamma > 0$. In this case the Kreĭn–von Neumann extension $S_{K,0}$ is given by $S_{K,0} = S \mathbin{\widehat{+}} \widehat{\mathfrak{N}}_0(S^{*})$; cf. (5.4.3). Moreover, the Friedrichs extension $S_F$ and the Kreĭn–von Neumann extension $S_{K,0}$ are transversal by (5.4.26). For $x = 0$ Corollary 5.5.12 shows that $\{\mathfrak{N}_0(S^{*}), \Gamma_0, \Gamma_1\}$, where

$$
\Gamma\_0 \widehat{f} = f\_0 \quad \text{and} \quad \Gamma\_1 \widehat{f} = P\_{\mathfrak{N}\_0(S^\*)} f'\_{\mathcal{F}}, \qquad \widehat{f} = \{f\_{\mathcal{F}}, f'\_{\mathcal{F}}\} + \{f\_0, 0\},
$$

is a boundary triplet for $S^{*} = S_F \mathbin{\widehat{+}} \widehat{\mathfrak{N}}_0(S^{*})$ such that

$$S\_{\mathcal{F}} = \ker \Gamma\_0 \quad \text{and} \quad S\_{\mathcal{K},0} = \ker \Gamma\_1;$$

moreover, for $\lambda \in \rho(S_F)$ the corresponding $\gamma$-field $\gamma$ is given by

$$\gamma(\lambda) = \left(I + \lambda \left(S\_{\mathcal{F}} - \lambda\right)^{-1}\right) \iota\_{\mathfrak{N}\_0(S^\*)},$$

and the corresponding Weyl function M is given by

$$M(\lambda) = \lambda + \lambda^2 P_{\mathfrak{N}_0(S^{*})} (S_F - \lambda)^{-1} \iota_{\mathfrak{N}_0(S^{*})}.$$

Note that, in particular, $\gamma(0) = \iota_{\mathfrak{N}_0(S^{*})}$ is the canonical embedding of $\mathfrak{N}_0(S^{*})$ into $\mathfrak{H}$, $\gamma(0)^{*} = P_{\mathfrak{N}_0(S^{*})}$ is the orthogonal projection onto $\mathfrak{N}_0(S^{*})$, and $M(0) = 0$.

The last objective in this section is to derive an abstract first Green identity. For this, consider a boundary triplet $\{\mathcal{G}, \Gamma_0, \Gamma_1\}$ for which $S_F = \ker \Gamma_0$ and assume that the self-adjoint extension corresponding to $\Gamma_1$ is also semibounded. In the following the notation $S_1 = \ker \Gamma_1$ (instead of $A_1$) is used; this will turn out to be more convenient for the next section. As a first step, rewrite the abstract Green identity (2.1.1) in the form

$$(f',g) - (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}) = (f, g') - (\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g}), \quad \widehat{f}, \widehat{g} \in S^\*. \tag{5.5.25}$$

In the following theorem it will be shown that the expression on the left-hand side, and hence on the right-hand side, of (5.5.25) can be seen as a restriction of the form $\mathfrak{t}\_{S\_1}$.
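The identity (5.5.25) itself can be checked numerically in a finite-dimensional model of Example 5.5.13. This is a hedged illustration, not part of the text: the matrix `A` and subspace spanned by `iota` are assumptions playing the roles of $S\_F$ and $\mathfrak{N}\_0(S^\*)$, with elements of $S^\*$ modeled formally as pairs $\{f\_{\mathcal{F}} + f\_0, A f\_{\mathcal{F}}\}$ and the boundary mappings taken from that example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical matrix model of Example 5.5.13: A stands in for S_F,
# the columns of iota span a subspace N standing in for N_0(S*).
n, k = 5, 2
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)
iota, _ = np.linalg.qr(rng.standard_normal((n, k)))
P = iota @ iota.T                       # orthogonal projection onto N

def triplet_element(fF, f0):
    # element {f_F + f_0, A f_F} of the model S*, together with
    # Gamma_0 f^ = f_0 and Gamma_1 f^ = P f'_F
    f, fprime = fF + f0, A @ fF
    return f, fprime, f0, P @ fprime

fF, gF = rng.standard_normal(n), rng.standard_normal(n)
f0, g0 = iota @ rng.standard_normal(k), iota @ rng.standard_normal(k)
f, fp, G0f, G1f = triplet_element(fF, f0)
g, gp, G0g, G1g = triplet_element(gF, g0)

# abstract Green identity (5.5.25):
# (f', g) - (Gamma_1 f^, Gamma_0 g^) = (f, g') - (Gamma_0 f^, Gamma_1 g^)
lhs = fp @ g - G1f @ G0g
rhs = f @ gp - G0f @ G1g
assert np.isclose(lhs, rhs)
print("Green identity verified")
```

Both sides reduce to $(Af\_{\mathcal{F}}, g\_0) - (f\_0, Ag\_{\mathcal{F}})$ in this model, which is why the assertion holds for arbitrary choices of the data.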

**Theorem 5.5.14.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ and let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$ such that

$$S\_{\mathcal{F}} = \ker \Gamma\_0 \quad \text{and} \quad S\_1 = \ker \Gamma\_1,\tag{5.5.26}$$

where $S\_F$ is the Friedrichs extension and $S\_1$ is a semibounded self-adjoint extension of $S$. Moreover, let $\mathfrak{t}\_{S\_1}$ be the closed semibounded form corresponding to $S\_1$. Then $\operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$ and the following equality holds:

$$(f',g) = \mathfrak{t}\_{S\_1}[f,g] + (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}), \quad \widehat{f}, \widehat{g} \in S^\*. \tag{5.5.27}$$

Proof. By the assumption (5.5.26), the extensions $S\_F$ and $S\_1$ are transversal, and hence it follows from Theorem 5.3.8 that $\operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$. Moreover, every $\widehat{f}, \widehat{g} \in S^\*$ can be decomposed as

$$
\widehat{f} = \widehat{f}\_{\mathcal{F}} + \widehat{f}\_1, \quad \widehat{g} = \widehat{g}\_{\mathcal{F}} + \widehat{g}\_1, \quad \widehat{f}\_{\mathcal{F}}, \widehat{g}\_{\mathcal{F}} \in S\_{\mathcal{F}}, \quad \widehat{f}\_1, \widehat{g}\_1 \in S\_1.
$$

Using the conditions in (5.5.26) one sees that

$$
\Gamma\_0 \widehat{f}\_\mathcal{F} = \Gamma\_0 \widehat{g}\_\mathcal{F} = 0 \quad \text{and} \quad \Gamma\_1 \widehat{f}\_1 = \Gamma\_1 \widehat{g}\_1 = 0,\tag{5.5.28}
$$

and therefore the identity (5.5.25) can be rewritten as

$$\begin{split} (f',g) - (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}) &= (f,g') - (\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g}) \\ &= (f\_{\mathcal{F}} + f\_1, g\_{\mathcal{F}}' + g\_1') - (\Gamma\_0 \widehat{f}\_1, \Gamma\_1 \widehat{g}\_{\mathcal{F}}). \end{split} \tag{5.5.29}$$

In order to rewrite the last term on the right-hand side of (5.5.29), observe that $\widehat{f}\_1, \widehat{g}\_{\mathcal{F}} \in S^\*$. Therefore, another application of the abstract Green identity for the boundary triplet $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ shows that

$$\begin{split} (f\_1', g\_{\mathcal{F}}) - (f\_1, g\_{\mathcal{F}}') &= (\Gamma\_1 \widehat{f}\_1, \Gamma\_0 \widehat{g}\_{\mathcal{F}}) - (\Gamma\_0 \widehat{f}\_1, \Gamma\_1 \widehat{g}\_{\mathcal{F}}) \\ &= -(\Gamma\_0 \widehat{f}\_1, \Gamma\_1 \widehat{g}\_{\mathcal{F}}), \end{split} \tag{5.5.30}$$

where (5.5.28) was used in the last equality. A combination of (5.5.29) and (5.5.30) gives

$$(f',g) - (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}) = (f\_{\mathcal{F}}, g\_{\mathcal{F}}') + (f\_{\mathcal{F}}, g\_1') + (f\_1, g\_1') + (f\_1', g\_{\mathcal{F}}). \tag{5.5.31}$$

Since $\operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$, the last three terms on the right-hand side of (5.5.31) can be rewritten by means of Theorem 5.1.18:

$$(f\_{\mathcal{F}}, g\_1') = \mathfrak{t}\_{S\_1}[f\_{\mathcal{F}}, g\_1], \quad (f\_1, g\_1') = \mathfrak{t}\_{S\_1}[f\_1, g\_1], \quad (f\_1', g\_{\mathcal{F}}) = \mathfrak{t}\_{S\_1}[f\_1, g\_{\mathcal{F}}],$$

whereas the first term on the right-hand side of (5.5.31) can be rewritten by means of Theorem 5.1.18 and the inclusion $\mathfrak{t}\_{S\_{\mathcal{F}}} \subset \mathfrak{t}\_{S\_1}$ as follows:

$$(f\_{\mathcal{F}}, g\_{\mathcal{F}}') = \mathfrak{t}\_{S\_{\mathcal{F}}}[f\_{\mathcal{F}}, g\_{\mathcal{F}}] = \mathfrak{t}\_{S\_1}[f\_{\mathcal{F}}, g\_{\mathcal{F}}].$$

Combined with (5.5.31), the above rewriting leads to

$$\begin{aligned} (f',g) - (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}) &= \mathfrak{t}\_{S\_1}[f\_{\mathcal{F}}, g\_{\mathcal{F}}] + \mathfrak{t}\_{S\_1}[f\_{\mathcal{F}}, g\_1] + \mathfrak{t}\_{S\_1}[f\_1, g\_1] + \mathfrak{t}\_{S\_1}[f\_1, g\_{\mathcal{F}}] \\ &= \mathfrak{t}\_{S\_1}[f\_{\mathcal{F}} + f\_1, g\_{\mathcal{F}} + g\_1] \\ &= \mathfrak{t}\_{S\_1}[f, g], \end{aligned}$$

and hence (5.5.27) has been shown. $\square$

Let $\Theta$ be a self-adjoint relation in $\mathcal{G}$ and consider the corresponding self-adjoint extension

$$H\_{\Theta} = \{ \widehat{f} \in S^\* : \{ \Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{f} \} \in \Theta \}; \tag{5.5.32}$$

here the notation $H\_\Theta$ (instead of $A\_\Theta$) is used, which is more convenient for the next section. For $\widehat{f} \in S^\*$ the condition $\{\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{f}\} \in \Theta$ is equivalent to

$$
\Gamma\_0 \widehat{f} \in \operatorname{dom}\Theta\_{\mathrm{op}} \quad \text{and} \quad P\_{\mathrm{op}} \Gamma\_1 \widehat{f} = \Theta\_{\mathrm{op}} \Gamma\_0 \widehat{f}, \tag{5.5.33}
$$

where $P\_{\mathrm{op}}$ denotes the orthogonal projection from $\mathcal{G}$ onto $\mathcal{G}\_{\mathrm{op}} = \overline{\operatorname{dom}\Theta}$ and $\Theta\_{\mathrm{op}}$ is the self-adjoint operator part of the self-adjoint relation $\Theta$; cf. the end of Section 2.2. The following statement is an immediate consequence of Theorem 5.5.14.

**Corollary 5.5.15.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ and let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$ such that $S\_F = \ker \Gamma\_0$ and $S\_1 = \ker \Gamma\_1$, where $S\_F$ is the Friedrichs extension and $S\_1$ is a semibounded self-adjoint extension of $S$. Let $\mathfrak{t}\_{S\_1}$ be the closed semibounded form corresponding to $S\_1$. Assume that $H\_\Theta$ is a self-adjoint extension of $S$ corresponding to the self-adjoint relation $\Theta$ in $\mathcal{G}$. Then

$$(f',g) = \mathfrak{t}\_{S\_1}[f,g] + (\Theta\_{\mathrm{op}}\,\Gamma\_0 \widehat{f}, \Gamma\_0 \widehat{g}), \quad \widehat{f}, \widehat{g} \in H\_{\Theta}. \tag{5.5.34}$$

Under the assumption that $H\_\Theta$ is semibounded, the left-hand side of (5.5.34) can be written as $\mathfrak{t}\_{H\_\Theta}[f,g]$. One may view the identity (5.5.34) as a perturbation of the form $\mathfrak{t}\_{S\_1}$ by means of the term $(\Theta\_{\mathrm{op}}\,\Gamma\_0 \widehat{f}, \Gamma\_0 \widehat{g})$. The proper interpretation of (5.5.34) in terms of quadratic forms requires an extension of the mapping $\Gamma\_0$; this procedure will be taken up in detail in Section 5.6 with the introduction of the notion of a boundary pair.

## **5.6 Boundary pairs and boundary triplets**

In this section the notion of a boundary pair for a semibounded symmetric relation in a Hilbert space $\mathfrak{H}$ is developed. It will turn out that there is an intimate connection between boundary pairs and boundary triplets. In fact, a boundary pair helps to express the closed semibounded form associated with a semibounded self-adjoint extension in terms of the parameter provided by the boundary triplet. The concept of a boundary pair is motivated by applications occurring in the study of semibounded differential operators.

**Definition 5.6.1.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ and let $S\_1$ be a semibounded self-adjoint extension of $S$ such that $S\_1$ and the Friedrichs extension $S\_F$ are transversal. Let $\mathfrak{t}\_{S\_1}$ be the closed form associated with $S\_1$ and let

$$
\mathfrak{H}\_{\mathfrak{t}\_{\mathcal{S}\_1}-a} = \left( \operatorname{dom} \mathfrak{t}\_{\mathcal{S}\_1}, (\cdot, \cdot)\_{\mathfrak{t}\_{\mathcal{S}\_1}-a} \right), \quad a < m(S\_1),
$$

be the corresponding Hilbert space. A pair $\{\mathcal{G}, \Lambda\}$ is called a boundary pair for $S$ corresponding to $S\_1$ if $\mathcal{G}$ is a Hilbert space and $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$ satisfies

$$
\ker \Lambda = \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} \quad \text{and} \quad \operatorname{ran} \Lambda = \mathcal{G}.
$$

Let $S$ be a closed semibounded relation in $\mathfrak{H}$, let $S\_F$ be the Friedrichs extension of $S$, and assume that $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$ corresponding to a semibounded self-adjoint extension $S\_1$ of $S$. Then, by Definition 5.6.1, the semibounded self-adjoint extensions $S\_1$ and $S\_F$ are transversal, which, by Theorem 5.3.8, is equivalent to the inclusion

$$\operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1} = \operatorname{dom} \Lambda.$$

One also has the orthogonal decomposition

$$\mathfrak{H}\_{\mathfrak{t}\_{\mathcal{S}\_1}-a} = \ker \left( S^\*-a \right) \oplus\_{\mathfrak{t}\_{\mathcal{S}\_1}-a} \mathfrak{H}\_{\mathfrak{t}\_{\mathcal{S}\_\mathcal{F}}-a}, \quad a < m(S\_1) \le m(S\_\mathcal{F});$$

cf. Proposition 5.3.7 and Theorem 5.3.8. Since $\ker \Lambda = \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}$ and $\operatorname{ran} \Lambda = \mathcal{G}$, one sees that the restriction of $\Lambda$ to the space $\ker (S^\* - a)$, equipped with the norm $\|\cdot\|\_{\mathfrak{t}\_{S\_1}-a}$, is a bounded mapping from $\ker (S^\* - a)$ to $\mathcal{G}$ such that

$$\text{ran}\left(\Lambda \restriction \ker\left(S^\*-a\right)\right) = \mathcal{G}.$$

Hence, the restriction $\Lambda \upharpoonright \ker (S^\* - a)$ has a bounded everywhere defined inverse.

Boundary pairs have a useful invariance property. To see this, consider a pair of semibounded self-adjoint extensions $S\_1$ and $S\_2$ of $S$ which are each transversal with $S\_F$, i.e.,

$$S^\* = S\_1 \stackrel{\frown}{+} S\_\mathcal{F} = S\_2 \stackrel{\frown}{+} S\_\mathcal{F}.\tag{5.6.1}$$

**Lemma 5.6.2.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$, let $S\_1$ and $S\_2$ be semibounded self-adjoint extensions which satisfy the transversality conditions (5.6.1), and assume that $a < \min \{m(S\_1), m(S\_2)\}$. Then $\operatorname{dom} \mathfrak{t}\_{S\_1} = \operatorname{dom} \mathfrak{t}\_{S\_2}$ and the form topologies of $\mathfrak{t}\_{S\_1}$ and $\mathfrak{t}\_{S\_2}$ coincide. Consequently, $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$ corresponding to $S\_1$ if and only if $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$ corresponding to $S\_2$.

Proof. It suffices to consider the boundedness property, as the other properties of a boundary pair in Definition 5.6.1 do not depend on the choice of the transversal extensions $S\_1$ and $S\_2$. In fact, according to Corollary 5.3.9, the transversality conditions in (5.6.1) imply that

$$\text{dom}\,(S\_1 - a)^{\frac{1}{2}} = \text{dom}\,(S\_2 - a)^{\frac{1}{2}}, \quad a < \min\left\{m(S\_1), m(S\_2)\right\}.$$

Moreover, again by Corollary 5.3.9, this implies that

$$c\_1 \big\| ((S\_1)\_{\mathrm{op}} - a)^{\frac{1}{2}} \varphi \big\|^{2} \leq \big\| ((S\_2)\_{\mathrm{op}} - a)^{\frac{1}{2}} \varphi \big\|^{2} \leq c\_2 \big\| ((S\_1)\_{\mathrm{op}} - a)^{\frac{1}{2}} \varphi \big\|^{2}$$

for all $\varphi \in \operatorname{dom} (S\_2 - a)^{\frac{1}{2}} = \operatorname{dom} (S\_1 - a)^{\frac{1}{2}}$ and for some constants $c\_1, c\_2 > 0$. In other words, $c\_1(\mathfrak{t}\_{S\_1} - a) \le \mathfrak{t}\_{S\_2} - a \le c\_2(\mathfrak{t}\_{S\_1} - a)$ for some constants $c\_1, c\_2 > 0$. Therefore, the form topologies of $\mathfrak{t}\_{S\_1}$ and $\mathfrak{t}\_{S\_2}$ coincide, and thus $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$ if and only if $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_2}-a}, \mathcal{G})$. This implies that $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$ corresponding to $S\_1$ if and only if $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$ corresponding to $S\_2$. $\square$

To explore the connection between boundary pairs and boundary triplets, the notion of extension in the next definition will be important.

**Definition 5.6.3.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$, let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$ with $A\_0 = \ker \Gamma\_0$, and assume that $\operatorname{mul} A\_0 = \operatorname{mul} S^\*$. Let $S\_1$ be a semibounded self-adjoint extension of $S$ such that $S\_1$ and $S\_F$ are transversal, so that $\operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$. Then an operator $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$ is said to be an extension of $\Gamma\_0$ if

$$
\Gamma\_0 \widehat{f} = \Lambda f \quad \text{for all} \ \widehat{f} = \{f, f'\} \in S^\*. \tag{5.6.2}
$$

It follows already from the assumption $\operatorname{mul} A\_0 = \operatorname{mul} S^\*$ that the mapping $f \mapsto \Gamma\_0 \widehat{f}$, $\widehat{f} = \{f, f'\} \in S^\*$, from $\operatorname{dom} S^\*$ to $\mathcal{G}$ in (5.6.2) is an operator. In fact, if $\widehat{f} = \{0, f'\} \in S^\*$, then $\widehat{f} \in A\_0 = \ker \Gamma\_0$, so that $\Gamma\_0 \widehat{f} = 0$. Note also that in the case $A\_0 = S\_F$ the condition $\operatorname{mul} A\_0 = \operatorname{mul} S^\*$ is satisfied by Theorem 5.3.3.

Next the notion of a compatible boundary pair will be introduced.

**Definition 5.6.4.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ and let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$ with

$$A\_0 = \ker \Gamma\_0 \quad \text{and} \quad A\_1 = \ker \Gamma\_1.$$

Let $S\_1$ be a semibounded self-adjoint extension of $S$ such that $S\_1$ and $S\_F$ are transversal, and let $\{\mathcal{G}, \Lambda\}$ be a boundary pair for $S$ corresponding to $S\_1$. Then $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ and $\{\mathcal{G}, \Lambda\}$ are said to be compatible if $\Lambda$ is an extension of $\Gamma\_0$ and the self-adjoint relations $A\_1$ and $S\_1$ coincide.

The next lemma provides a sufficient condition under which an extension $\Lambda$ of $\Gamma\_0$ yields a boundary pair, or a compatible boundary pair, $\{\mathcal{G}, \Lambda\}$ for $S$ corresponding to $S\_1$. In the special case where the defect numbers of $S$ are finite this condition is automatically satisfied, which makes the lemma useful in applications to Sturm–Liouville operators in Chapter 6.

**Lemma 5.6.5.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ and let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$ with $A\_0 = \ker \Gamma\_0$ and $A\_1 = \ker \Gamma\_1$. Let $S\_1$ be a semibounded self-adjoint extension of $S$ such that $S\_1$ and $S\_F$ are transversal or, equivalently, $\operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$, and let $a < m(S\_1)$. Then the following statements hold:

(i) If $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$ is an extension of $\Gamma\_0$, then $\operatorname{ran} \Lambda = \mathcal{G}$ and $\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} \subset \ker \Lambda$.

(ii) If $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$ is an extension of $\Gamma\_0$ and

$$\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} = \ker \Lambda, \tag{5.6.3}$$

then $A\_0 = \ker \Gamma\_0 = S\_F$ and $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$ corresponding to $S\_1$. If, in addition, $A\_1 = S\_1$, then $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ and $\{\mathcal{G}, \Lambda\}$ are compatible.

In particular, if $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$ is an extension of $\Gamma\_0$ and the defect numbers of $S$ are finite, then $A\_0 = \ker \Gamma\_0 = S\_F$ and $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$ corresponding to $S\_1$. If, in this case, also $A\_1 = S\_1$, then $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ and $\{\mathcal{G}, \Lambda\}$ are compatible.

Proof. (i) Let $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$ be an extension of $\Gamma\_0$. It follows from the extension property (5.6.2) that $\operatorname{ran} \Lambda = \mathcal{G}$ and that

$$\text{dom}\,S \subset \text{dom}\,A\_0 \subset \ker \Lambda. \tag{5.6.4}$$

Since $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$, it is clear that $\ker \Lambda$ is closed in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$ and, by (5.6.4), the closure of $\operatorname{dom} S$ with respect to the inner product $(\cdot, \cdot)\_{\mathfrak{t}\_{S\_1}-a}$ is contained in $\ker \Lambda$. On the other hand, $\operatorname{dom} S \subset \operatorname{dom} S\_F \subset \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}$ and the inner product on $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$ restricted to $\mathfrak{H}\_{\mathfrak{t}\_{S\_{\mathcal{F}}}-a}$ coincides with the inner product $(\cdot, \cdot)\_{\mathfrak{t}\_{S\_{\mathcal{F}}}-a}$ in $\mathfrak{H}\_{\mathfrak{t}\_{S\_{\mathcal{F}}}-a}$ (see the discussion below (5.3.10) and (5.3.11)). As $\operatorname{dom} S$ is a core of $\mathfrak{t}\_{S\_{\mathcal{F}}}$, it follows from Corollary 5.1.15 that the closure of $\operatorname{dom} S$ with respect to the inner product $(\cdot, \cdot)\_{\mathfrak{t}\_{S\_{\mathcal{F}}}-a}$ coincides with $\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}$. Thus, one concludes $\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} \subset \ker \Lambda$.

(ii) Assume that $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$ is an extension of $\Gamma\_0$ and that (5.6.3) holds. Then

$$A\_0 = \{ \widehat{f} \in S^\* \, : \, \widehat{f} \in \ker \Gamma\_0 \} \subset \{ \widehat{f} \in S^\* \, : \, f \in \ker \Lambda \}.\tag{5.6.5}$$

Since $\ker \Lambda = \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}$, it follows from Theorem 5.3.3 that the right-hand side of (5.6.5) coincides with $S\_F$. Thus, $A\_0 \subset S\_F$ and, since both relations are self-adjoint, it follows that $A\_0 = S\_F$. Moreover, one has $\operatorname{ran} \Lambda = \mathcal{G}$ by (i), and hence $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$ corresponding to $S\_1$. It follows directly from Definition 5.6.4 that the additional assumption $A\_1 = S\_1$ yields compatibility of $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ and $\{\mathcal{G}, \Lambda\}$.

For the last statement, assume that the defect numbers of $S$ are finite and let $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$ be an extension of $\Gamma\_0$. Then $\operatorname{ran} \Lambda = \mathcal{G}$ and $\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} \subset \ker \Lambda$ by (i). Since $S\_1$ and $S\_F$ are transversal, one has the orthogonal decomposition

$$\mathfrak{H}\_{\mathfrak{t}\_{\mathcal{S}\_1}-a} = \ker \left( S^\* - a \right) \oplus\_{\mathfrak{t}\_{\mathcal{S}\_1}-a} \mathfrak{H}\_{\mathfrak{t}\_{\mathcal{S}\_\mathcal{F}}-a}$$

for $a < m(S\_1)$ by Proposition 5.3.7 and Theorem 5.3.8. Then

$$\dim \operatorname{ran} \Lambda = \dim \mathcal{G} = \dim \ker (S^\* - a) < \infty$$

together with $\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} \subset \ker \Lambda$ implies $\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} = \ker \Lambda$. Now the assertions follow from (ii). $\square$

If the boundary triplet $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ and the boundary pair $\{\mathcal{G}, \Lambda\}$ are compatible, then automatically $\ker \Gamma\_0 = S\_F$. In the next theorem it will be shown that for a boundary triplet for $S^\*$ such that $\ker \Gamma\_0 = S\_F$ and $\ker \Gamma\_1 = S\_1$ is semibounded, there exists a compatible boundary pair $\{\mathcal{G}, \Lambda\}$ for $S$ corresponding to $S\_1$.

**Theorem 5.6.6.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ and let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$. Assume that

$$
\ker \Gamma\_0 = S\_\mathcal{F} \quad \text{and} \quad \ker \Gamma\_1 = S\_1,
$$

where $S\_F$ is the Friedrichs extension and $S\_1$ is a semibounded self-adjoint extension of $S$. Then, with $a < m(S\_1)$ fixed, the mapping

$$\Lambda\_0 = \left\{ \{f, \Gamma\_0 \widehat{f}\} : \widehat{f} \in S^\* \right\} \subset \mathfrak{H}\_{\mathfrak{t}\_{S\_1} - a} \times \mathcal{G}, \tag{5.6.6}$$

is (the graph of) a densely defined bounded operator. Its unique bounded extension $\Lambda$ to all of $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$ induces a boundary pair $\{\mathcal{G}, \Lambda\}$ for $S$ corresponding to $S\_1$ which is compatible with the boundary triplet $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$.

Proof. The assumption that $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ is a boundary triplet for $S^\*$ implies that $S\_F$ and $S\_1$ are transversal extensions of $S$. By Theorem 5.3.8, this is equivalent to $\operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$, and hence $\operatorname{dom} \Lambda\_0 = \operatorname{dom} S^\*$ is contained in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$, i.e., the relation $\Lambda\_0$ in (5.6.6) is well defined from $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$ to $\mathcal{G}$.

In order to see that $\Lambda\_0$ is an operator, assume that $\{f, \Gamma\_0 \widehat{f}\} \in \Lambda\_0$ with $\widehat{f} \in S^\*$ satisfying $f = 0$. Hence, $\widehat{f} = \{0, f'\} \in S^\*$, which by Theorem 5.3.3 shows that $\widehat{f} \in S\_F$. Now the identity $S\_F = \ker \Gamma\_0$ implies that $\Gamma\_0 \widehat{f} = 0$. Hence, $\operatorname{mul} \Lambda\_0 = \{0\}$, that is, $\Lambda\_0$ in (5.6.6) is an operator. Furthermore, it is clear that $S\_F = \ker \Gamma\_0$ yields $\ker \Lambda\_0 = \operatorname{dom} S\_F$.

Next it will be shown that the operator

$$
\Lambda\_0: \mathfrak{H}\_{\mathfrak{t}\_{S\_1} - a} \supset \operatorname{dom} S^\* \to \mathcal{G}, \qquad f \mapsto \Lambda\_0 f = \Gamma\_0 \widehat{f}, \tag{5.6.7}
$$

is bounded. By assumption, $a < m(S\_1) \le m(S\_F)$, and hence one has the decomposition $S^\* = S\_F \stackrel{\frown}{+} \widehat{\mathfrak{N}}\_a(S^\*)$; see Theorem 1.7.1. Let $\widehat{f} = \{f, f'\} \in S^\*$ and decompose it accordingly,

$$
\widehat{f} = \widehat{f}\_{\mathcal{F}} + \widehat{f}\_a, \quad \widehat{f}\_{\mathcal{F}} \in S\_{\mathcal{F}}, \ \widehat{f}\_a = \{f\_a, af\_a\} \in \widehat{\mathfrak{N}}\_a(S^\*).
$$

From $S\_F = \ker \Gamma\_0$ and the fact that $\Gamma\_0 : S^\* \to \mathcal{G}$ is bounded (with respect to the graph norm; cf. Proposition 2.1.2) it follows that

$$\|\Lambda\_0 f\|^2 = \|\Gamma\_0 \widehat{f}\|^2 = \|\Gamma\_0 \widehat{f}\_a\|^2 \le M(\|f\_a\|^2 + a^2 \|f\_a\|^2) \le M' \|f\_a\|^2$$

holds for some constants $M, M' > 0$. Then it is clear from (5.1.9) that there exists $M'' > 0$ such that

$$\|\Lambda\_0 f\|^2 \le M'' \|f\_a\|\_{\mathfrak{t}\_{S\_1}-a}^2 \le M'' \left(\|f\_{\mathcal{F}}\|\_{\mathfrak{t}\_{S\_1}-a}^2 + \|f\_a\|\_{\mathfrak{t}\_{S\_1}-a}^2\right) = M'' \|f\|\_{\mathfrak{t}\_{S\_1}-a}^2,$$

where $f\_{\mathcal{F}} \in \operatorname{dom} S\_F \subset \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}$; here in the last equality one uses the orthogonal decomposition

$$\operatorname{dom} \mathfrak{t}\_{S\_1} = \ker \left( S^\* - a \right) \oplus\_{\mathfrak{t}\_{S\_1} - a} \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}$$

with respect to the inner product $(\cdot, \cdot)\_{\mathfrak{t}\_{S\_1}-a}$ in the Hilbert space $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$; see Proposition 5.3.7 and Theorem 5.3.8. This shows that the operator $\Lambda\_0$ in (5.6.7) is bounded.

By Theorem 5.1.18 (ii), $\operatorname{dom} S\_1$ is a core of $\mathfrak{t}\_{S\_1}$ and hence $\operatorname{dom} S\_1$ is dense in the Hilbert space $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$ by Corollary 5.1.15. As $\operatorname{dom} S\_1 \subset \operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$, it follows that $\operatorname{dom} S^\*$ is also a dense subspace of $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$. Therefore, the bounded operator $\Lambda\_0$ in (5.6.7) is densely defined, and hence $\Lambda\_0$ admits a unique extension $\Lambda$ by continuity to all of $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$.

Since the restriction of the inner product in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$ to $\mathfrak{H}\_{\mathfrak{t}\_{S\_{\mathcal{F}}}-a}$ coincides with the inner product in $\mathfrak{H}\_{\mathfrak{t}\_{S\_{\mathcal{F}}}-a}$ (see the discussion below (5.3.10) and (5.3.11)), the closure of $\operatorname{dom} S\_F = \ker \Lambda\_0$ in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$ is $\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}$. This implies $\ker \Lambda = \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}$. Finally, by definition, $\operatorname{ran} \Lambda = \operatorname{ran} \Lambda\_0 = \mathcal{G}$, and hence $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$ corresponding to $S\_1$. It is also clear that the boundary pair $\{\mathcal{G}, \Lambda\}$ and the boundary triplet $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ are compatible. $\square$

In the next corollary it is shown that in the context of Theorem 5.6.6 the continuity of $\Lambda : \mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a} \to \mathcal{G}$ makes it possible to extend the identity

$$(f',g) = \mathfrak{t}\_{S\_1}[f,g] + (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}), \quad \widehat{f}, \widehat{g} \in S^\*, \tag{5.6.8}$$

in Theorem 5.5.14 to $\widehat{f} \in S^\*$ and $g \in \operatorname{dom} \mathfrak{t}\_{S\_1}$.

**Corollary 5.6.7.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$, and let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ and $\{\mathcal{G}, \Lambda\}$ be the boundary triplet and boundary pair in Theorem 5.6.6, respectively. Then the following equality holds:

$$(f',g) = \mathfrak{t}\_{S\_1}[f,g] + (\Gamma\_1 \widehat{f}, \Lambda g), \quad \widehat{f} \in S^\*, \ g \in \operatorname{dom} \mathfrak{t}\_{S\_1}.$$

Proof. As $\operatorname{dom} S^\*$ is a dense subspace of $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$, there exist $g\_n \in \operatorname{dom} S^\*$ and $\widehat{g}\_n = \{g\_n, g'\_n\} \in S^\*$ such that $g\_n \to g$ in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$, and hence $\Gamma\_0 \widehat{g}\_n = \Lambda g\_n \to \Lambda g$ in $\mathcal{G}$. Furthermore, $g\_n \to g$ in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$ also implies $g\_n \to g$ in $\mathfrak{H}$ and $\mathfrak{t}\_{S\_1}[f, g\_n] \to \mathfrak{t}\_{S\_1}[f, g]$ for $\widehat{f} = \{f, f'\} \in S^\*$. By (5.6.8), the identity

$$(f', g\_n) = \mathfrak{t}\_{S\_1}[f, g\_n] + (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}\_n) = \mathfrak{t}\_{S\_1}[f, g\_n] + (\Gamma\_1 \widehat{f}, \Lambda g\_n)$$

holds for $\widehat{f} = \{f, f'\}$ and $\widehat{g}\_n = \{g\_n, g'\_n\} \in S^\*$. Now the assertion follows by taking limits. $\square$

For the sake of completeness the existence of boundary pairs is stated in the following corollary as an addendum to Definition 5.6.1.

**Corollary 5.6.8.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$. Then there exist a semibounded self-adjoint extension $S\_1$ of $S$ such that $S\_1$ and $S\_F$ are transversal and a mapping $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}, \mathcal{G})$, $a < m(S\_1)$, such that $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$.

Proof. By Proposition 5.5.10, there exists a boundary triplet $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ for $S^\*$ such that $S\_F = \ker \Gamma\_0$ and $S\_{K,x} = \ker \Gamma\_1$ for $x < m(S) = m(S\_F)$. Now the statement follows with $S\_1 = S\_{K,x}$ and $a < m(S\_1) = x$ from Theorem 5.6.6. $\square$

**Example 5.6.9.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ with lower bound $\gamma$, fix $x < \gamma$, and consider the boundary triplet $\{\mathfrak{N}\_x(S^\*), \Gamma\_0, \Gamma\_1\}$ for $S^\*$ in Corollary 5.5.12 with

$$
\Gamma\_0 \widehat{f} = f\_x, \qquad \widehat{f} = \{f\_{\mathcal{F}}, f'\_{\mathcal{F}}\} + \{f\_x, xf\_x\} \in S\_{\mathcal{F}} \stackrel{\frown}{+} \widehat{\mathfrak{N}}\_x(S^\*).
$$

Then $S\_F = \ker \Gamma\_0$ and $S\_{K,x} = \ker \Gamma\_1$, and for $a < x$ one has the direct sum decomposition

$$\mathfrak{H}\_{\mathfrak{t}\_{S\_{K,x}}-a} = \operatorname{dom} \mathfrak{t}\_{S\_{K,x}} = \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} + \mathfrak{N}\_x(S^\*), \quad a < x < \gamma; \tag{5.6.9}$$

cf. Corollary 5.4.16. Then the mapping

$$
\Lambda f = f\_x, \qquad f = f\_{\mathcal{F}} + f\_x \in \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} + \mathfrak{N}\_x(S^\*),
$$

belongs to $\mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_{K,x}}-a}, \mathfrak{N}\_x(S^\*))$. In fact, let $f \in \mathfrak{H}\_{\mathfrak{t}\_{S\_{K,x}}-a}$ have the decomposition $f = f\_{\mathcal{F}} + f\_x$ as in (5.6.9), and define $f\_a = (I + (a - x)(S\_F - a)^{-1})f\_x$. Then $f = g\_{\mathcal{F}} + f\_a$, where $g\_{\mathcal{F}} = f\_x - f\_a + f\_{\mathcal{F}} \in \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}$ and $f\_a \in \mathfrak{N}\_a(S^\*)$. Now observe that $f\_x = (I + (x - a)(S\_F - x)^{-1})f\_a$, so that Proposition 1.4.6 leads to the estimate

$$\|f\_x\| \le \frac{\gamma - a}{\gamma - x} \|f\_a\|.$$

Recall from (5.1.9) (with $\mathfrak{t} = \mathfrak{t}\_{S\_{K,x}}$, $\gamma = x$, $\varphi = f\_a$) and (5.4.28) that

$$(x-a)\|f\_a\|^2 \le \|f\_a\|\_{\mathfrak{t}\_{S\_{K,x}}-a}^2 \le \|f\_a\|\_{\mathfrak{t}\_{S\_{K,x}}-a}^2 + \|g\_{\mathcal{F}}\|\_{\mathfrak{t}\_{S\_{K,x}}-a}^2 = \|f\|\_{\mathfrak{t}\_{S\_{K,x}}-a}^2,$$

which proves that $\Lambda \in \mathbf{B}(\mathfrak{H}\_{\mathfrak{t}\_{S\_{K,x}}-a}, \mathfrak{N}\_x(S^\*))$. Thus, $\Lambda$ extends $\Gamma\_0$ in the sense of Definition 5.6.3. It is clear that $\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} = \ker \Lambda$, and hence Lemma 5.6.5 (ii) implies that $\{\mathfrak{N}\_x(S^\*), \Lambda\}$ is a boundary pair for $S$ corresponding to $S\_{K,x}$ which is compatible with the boundary triplet $\{\mathfrak{N}\_x(S^\*), \Gamma\_0, \Gamma\_1\}$ in Corollary 5.5.12.
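The norm estimate used in this example can be tested numerically in a matrix model (illustrative only; the matrix `A` plays the role of $S\_F$ with lower bound $\gamma$, and all names are ad hoc): for $A = A^{T} \ge \gamma$ and $a < x < \gamma$, the map $f\_a \mapsto (I + (x-a)(A-x)^{-1})f\_a$ has norm at most $(\gamma-a)/(\gamma-x)$, since its spectrum consists of the values $(t-a)/(t-x)$ with $t \in \sigma(A)$, which are maximal at $t = \gamma$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical check of ||f_x|| <= (gamma - a)/(gamma - x) ||f_a||
# for f_x = (I + (x - a)(A - x)^{-1}) f_a with A = A^T >= gamma, a < x < gamma.
n = 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
gamma_lb = 1.0
eigs = gamma_lb + rng.random(n) * 5.0      # spectrum of A in [gamma, gamma + 5]
A = Q @ np.diag(eigs) @ Q.T
a, x = -1.0, 0.5                            # a < x < gamma

bound = (gamma_lb - a) / (gamma_lb - x)
for _ in range(100):
    f_a = rng.standard_normal(n)
    f_x = f_a + (x - a) * np.linalg.solve(A - x * np.eye(n), f_a)
    assert np.linalg.norm(f_x) <= bound * np.linalg.norm(f_a) + 1e-12
print("estimate verified")
```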

Theorem 5.6.6 admits a converse. If $S$ is a semibounded relation and $\{\mathcal{G}, \Lambda\}$ is a boundary pair for $S$ in the sense of Definition 5.6.1, then there exists a compatible boundary triplet $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ for $S^\*$. The construction of the mapping $\Gamma\_0 : S^\* \to \mathcal{G}$ is inspired by Lemma 5.6.5 and the construction of $\Gamma\_1 : S^\* \to \mathcal{G}$ is inspired by the first Green formula in Theorem 5.5.14.

**Theorem 5.6.10.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ and let $S\_1$ be a semibounded self-adjoint extension of $S$ such that $S\_1$ and $S\_F$ are transversal. Let $\{\mathcal{G}, \Lambda\}$ be a boundary pair for $S$ corresponding to $S\_1$. Then

$$\Gamma\_0 = \left\{ \{\widehat{f}, \Lambda f\} \, : \, \widehat{f} \in S^\* \right\} \tag{5.6.10}$$

is (the graph of) a linear operator from $S^\*$ to $\mathcal{G}$ and there exists a unique linear operator $\Gamma\_1 : S^\* \to \mathcal{G}$ such that $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ defines a boundary triplet for $S^\*$ which is compatible with the boundary pair $\{\mathcal{G}, \Lambda\}$ for $S$ corresponding to $S\_1$.

Proof. The relations $S\_1$ and $S\_F$ are semibounded self-adjoint extensions of $S$, and hence $m(S\_1) \le m(S\_F) = m(S)$. There are the following decompositions of the relation $S^\*$:

$$S^\* = S\_{\mathcal{F}} \stackrel{\frown}{+} \widehat{\mathfrak{N}}\_a(S^\*), \quad a < m(S\_{\mathcal{F}}), \tag{5.6.11}$$

and, likewise,

$$S^\* = S\_1 \stackrel{\frown}{+} \hat{\mathfrak{N}}\_a(S^\*), \quad a < m(S\_1);\tag{5.6.12}$$

cf. Theorem 1.7.1. Recall that $\mathfrak{t}\_{S\_{\mathcal{F}}} \subset \mathfrak{t}\_{S\_1}$ and, since $S\_1$ and $S\_F$ are transversal, there is the orthogonal decomposition

$$\operatorname{dom} \mathfrak{t}\_{S\_1} = \ker \left( S^\* - a \right) \oplus\_{\mathfrak{t}\_{S\_1} - a} \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}, \quad a < m(S\_1), \tag{5.6.13}$$

of the Hilbert space $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$. Moreover, in this case one also has $\operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$; cf. Proposition 5.3.7 and Theorem 5.3.8. The proof will be given in a number of steps. The mapping $\Gamma\_0$ is considered in Step 1. Step 2 and Step 3 are preparations for the construction of $\Gamma\_1$ in Step 4. In the remaining steps the various properties of $\Gamma\_1$ are established.

Step 1. This step concerns the properties of $\Gamma\_0$ in (5.6.10). Since $\operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$, one sees from Definition 5.6.1 that the relation $\Gamma\_0$ is well defined. It is clear that $\Gamma\_0$ is the graph of an operator,

$$
\Gamma\_0: S^\* \to \mathcal{G}, \qquad \widehat{f} \mapsto \Gamma\_0 \widehat{f} = \Lambda f, \tag{5.6.14}
$$

and that $\{0\} \times \operatorname{mul} S^\* \subset \ker \Gamma\_0$. Furthermore,

$$S\_{\mathcal{F}} = \left\{ \widehat{f} \in S^\* : f \in \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} \right\} = \left\{ \widehat{f} \in S^\* : f \in \ker \Lambda \right\} = \ker \Gamma\_0, \tag{5.6.15}$$

where the first equality holds by Theorem 5.3.3, the second equality is due to $\operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}} = \ker \Lambda$, and the third equality follows from (5.6.14).

Since $\operatorname{ran} \Lambda = \mathcal{G}$ and $\ker \Lambda = \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}$, it follows from (5.6.13) that $\Lambda$ maps $\ker (S^\* - a)$ bijectively onto $\mathcal{G}$. Therefore,

$$
\Gamma\_0 \text{ is a bijection between } \widehat{\mathfrak{N}}\_a(S^\*) \text{ and } \mathcal{G}, \tag{5.6.16}
$$

and, in particular,

$$
\operatorname{ran} \Gamma\_0 = \mathcal{G}. \tag{5.6.17}
$$

Step 2. Now it will be shown that the identity

$$\mathfrak{t}\_{S\_1}[f, g\_{\mathcal{F}}] = (f', g\_{\mathcal{F}}), \quad \widehat{f} \in S^\*, \ g\_{\mathcal{F}} \in \operatorname{dom} \mathfrak{t}\_{S\_{\mathcal{F}}}, \tag{5.6.18}$$

holds. For this, assume that $\widehat{f}$ is decomposed as

$$
\widehat{f} = \widehat{f}\_{\mathcal{F}} + \widehat{h}\_a, \qquad \widehat{f}\_{\mathcal{F}} \in S\_{\mathcal{F}}, \ \widehat{h}\_a \in \widehat{\mathfrak{N}}\_a(S^\*); \tag{5.6.19}
$$

cf. (5.6.11). Recall that $\operatorname{dom} S^\* \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$ and observe that with (5.6.19) one gets

$$\mathfrak{t}\_{S\_1}[f, g\_\mathcal{F}] = \mathfrak{t}\_{S\_1}[f\_\mathcal{F} + h\_a, g\_\mathcal{F}] = \mathfrak{t}\_{S\_\mathcal{F}}[f\_\mathcal{F}, g\_\mathcal{F}] + \mathfrak{t}\_{S\_1}[h\_a, g\_\mathcal{F}].\tag{5.6.20}$$

The orthogonal decomposition in (5.6.13) gives

$$0 = (h\_a, g\_\mathcal{F})\_{\mathfrak{t}\_{\mathcal{S}\_1} - a} = \mathfrak{t}\_{\mathcal{S}\_1}[h\_a, g\_\mathcal{F}] - a(h\_a, g\_\mathcal{F}).$$

Hence, (5.6.20) leads to the identity

$$\mathfrak{t}\_{S\_1}[f, g\_{\mathcal{F}}] = \mathfrak{t}\_{S\_{\mathcal{F}}}[f\_{\mathcal{F}}, g\_{\mathcal{F}}] + a(h\_a, g\_{\mathcal{F}}) = (f'\_{\mathcal{F}}, g\_{\mathcal{F}}) + a(h\_a, g\_{\mathcal{F}}),$$

which shows (5.6.18), since $f' = f'\_{\mathcal{F}} + a h\_a$.

Step 3. Next it will be shown that

$$\mathfrak{t}\_{S\_1}[f,g] - (f',g) = (f\_a, g\_a)\_{\mathfrak{t}\_{S\_1} - a}, \quad \widehat{f}, \widehat{g} \in S^\*, \tag{5.6.21}$$

where $\widehat{f}$ and $\widehat{g}$ are decomposed as

$$
\widehat{f} = \widehat{f}\_1 + \widehat{f}\_a, \qquad \widehat{f}\_1 \in S\_1, \ \widehat{f}\_a \in \widehat{\mathfrak{N}}\_a(S^\*), \tag{5.6.22}
$$

and

$$
\widehat{g} = \widehat{g}\_{\mathcal{F}} + \widehat{g}\_a, \qquad \widehat{g}\_{\mathcal{F}} \in S\_{\mathcal{F}}, \ \widehat{g}\_a \in \widehat{\mathfrak{N}}\_a(S^\*); \tag{5.6.23}
$$

cf. (5.6.12) and (5.6.11). For this note first that with (5.6.22) the identity (5.6.18) in Step 2 gives

$$\mathfrak{t}\_{S\_1}[f\_1 + f\_a, g\_{\mathcal{F}}] = (f'\_1 + af\_a, g\_{\mathcal{F}}). \tag{5.6.24}$$

Furthermore, note that with $g\_a$ from (5.6.23) one has

$$\mathfrak{t}\_{S\_1}[f\_1, g\_a] = (f'\_1, g\_a) \tag{5.6.25}$$

due to $\widehat{f}\_1 \in S\_1$ and $g\_a \in \mathfrak{N}\_a(S^\*) \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$; cf. Theorem 5.1.18. A combination of (5.6.24) and (5.6.25) leads to

$$\begin{aligned} \mathfrak{t}\_{S\_1}[f,g] - (f',g) &= \mathfrak{t}\_{S\_1}[f\_1 + f\_a, g\_\mathcal{F} + g\_a] - (f'\_1 + af\_a, g\_\mathcal{F} + g\_a) \\ &= \mathfrak{t}\_{S\_1}[f\_1 + f\_a, g\_a] - (f'\_1 + af\_a, g\_a) \\ &= \mathfrak{t}\_{S\_1}[f\_a, g\_a] - a(f\_a, g\_a), \end{aligned}$$

where the second equality uses (5.6.24) to cancel the terms involving $g\_\mathcal{F}$ and the last equality uses (5.6.25); this gives (5.6.21).

Step 4. In this step the operator $\Gamma\_1 : S^\* \to \mathcal{G}$ will be constructed. For this purpose fix $\widehat{f} \in S^\*$ and consider the linear relation

$$\Phi\_{\widehat{f}} = \left\{ \{ \Gamma\_0 \widehat{g}, (g, f') - \mathfrak{t}\_{S\_1}[g, f] \} \, : \, \widehat{g} \in S^\* \right\}. \tag{5.6.26}$$

It follows from $\operatorname{ran} \Gamma\_0 = \mathcal{G}$ in (5.6.17) that $\operatorname{dom} \Phi\_{\widehat{f}} = \mathcal{G}$. Next it will be shown that $\Phi\_{\widehat{f}}$ is the graph of a bounded linear functional. If $\widehat{f}$ and $\widehat{g}$ are decomposed as in (5.6.22) and (5.6.23), then it follows from (5.6.21) in Step 3 that

$$\left| (g, f') - \mathfrak{t}\_{S\_1}[g, f] \right| = \left| \mathfrak{t}\_{S\_1}[f, g] - (f', g) \right| = \left| (f\_a, g\_a)\_{\mathfrak{t}\_{S\_1} - a} \right| \le \| f\_a \|\_{\mathfrak{t}\_{S\_1} - a} \| g\_a \|\_{\mathfrak{t}\_{S\_1} - a}.$$

Recall that the restriction of $\Lambda$ to $\ker (S^\* - a)$ has a bounded inverse (with respect to the norm $\|\cdot\|\_{\mathfrak{t}\_{S\_1}-a}$ on $\ker (S^\* - a)$). Therefore, by (5.6.10) and (5.6.15),

$$\| g\_a \|\_{\mathfrak{t}\_{S\_1}-a} \le C \|\Lambda g\_a\| = C \|\Gamma\_0 \widehat{g}\_a\| = C \|\Gamma\_0 \widehat{g}\|\tag{5.6.27}$$

for some constant C > 0 and, as a consequence,

$$\left| (g, f') - \mathfrak{t}\_{S\_1}[g, f] \right| \le C \| f\_a \|\_{\mathfrak{t}\_{S\_1} - a} \| \Gamma\_0 \widehat{g} \|, \qquad \widehat{g} \in S^\*.$$

This implies that the relation $\Phi\_{\widehat{f}}$ in (5.6.26) is the graph of an everywhere defined bounded functional. Hence, by the Riesz representation theorem, there exists a unique $\varphi\_{\widehat{f}} \in \mathcal{G}$ such that

$$\Phi\_{\widehat{f}}\left(\Gamma\_0\widehat{g}\right) = \left(\Gamma\_0\widehat{g}, \varphi\_{\widehat{f}}\right), \quad \widehat{g} \in S^\*.$$

Define the mapping $\Gamma\_1$ by

$$
\Gamma\_1: S^\* \to \mathcal{G}, \qquad \widehat{f} \mapsto \Gamma\_1 \widehat{f} := \varphi\_{\widehat{f}}.\tag{5.6.28}
$$

By construction, $\Gamma\_1$ is linear and it follows from (5.6.21) and (5.6.26) that

$$(\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}) = (f', g) - \mathfrak{t}\_{S\_1}[f, g] = -(f\_a, g\_a)\_{\mathfrak{t}\_{S\_1} - a} \tag{5.6.29}$$

for all $\widehat{f} \in S^\*$ and $\widehat{g} \in S^\*$ decomposed in the forms (5.6.22) and (5.6.23), respectively.

Step 5. It will be shown that the operator $\Gamma\_1$, constructed in Step 4, satisfies

$$S\_1 = \ker \Gamma\_1. \tag{5.6.30}$$

To show that $S\_1 \subset \ker \Gamma\_1$, assume that $\widehat{f} \in S\_1$. Then (5.6.29) in Step 4 implies that $(\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}) = 0$ for all $\widehat{g} \in S^\*$, and since $\Gamma\_0$ is surjective (see (5.6.17)), one concludes that $\Gamma\_1 \widehat{f} = 0$. Thus, $S\_1 \subset \ker \Gamma\_1$. To show the reverse inclusion, assume that $\widehat{f} = \{f, f'\} \in \ker \Gamma\_1$. Then it follows from (5.6.29) that

$$\mathfrak{t}\_{S\_1}[f,g] = (f',g) \quad \text{for all} \quad g \in \operatorname{dom} S\_1 \subset \operatorname{dom} S^\*.$$

Since $\operatorname{dom} S\_1$ is a core of $\mathfrak{t}\_{S\_1}$, it is a consequence of the first representation theorem (Theorem 5.1.18) that $\widehat{f} = \{f, f'\} \in S\_1$. Thus, $\ker \Gamma\_1 \subset S\_1$, and so (5.6.30) has been proved.

Step 6. Next it will be shown that the operator $\Gamma\_1$, constructed in Step 4, satisfies

$$
\operatorname{ran} \Gamma\_1 = \mathcal{G}.\tag{5.6.31}
$$

For this purpose note first that $\operatorname{ran} \Gamma\_1$ is dense in $\mathcal{G}$. In fact, if $\overline{\operatorname{ran}\, \Gamma\_1} \neq \mathcal{G}$, then in view of (5.6.16) there exists $\widehat{g}\_a \in \widehat{\mathfrak{N}}\_a(S^\*)$ with $\Gamma\_0 \widehat{g}\_a \neq 0$ such that

$$(\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}\_a) = 0 \quad \text{for all} \quad \widehat{f} \in S^\*. \tag{5.6.32}$$

Now apply (5.6.29) with $\widehat{f} = \widehat{f}\_1 + \widehat{f}\_a \in S^\*$, $\widehat{f}\_1 \in S\_1$, $\widehat{f}\_a \in \widehat{\mathfrak{N}}\_a(S^\*)$, and $\widehat{g}\_a \in \widehat{\mathfrak{N}}\_a(S^\*)$. Then (5.6.32) implies

$$(f\_a, g\_a)\_{\mathfrak{t}\_{S\_1}-a} = 0$$

for all $\widehat{f}\_a \in \widehat{\mathfrak{N}}\_a(S^\*)$. Therefore, $\widehat{g}\_a = 0$ and hence $g\_a = 0$ and $\Gamma\_0 \widehat{g}\_a = 0$, which is a contradiction. Thus, $\operatorname{ran} \Gamma\_1$ is dense in $\mathcal{G}$.

To conclude (5.6.31), it suffices to show that $\operatorname{ran} \Gamma\_1$ is closed. For this consider the restriction $\Gamma\_1'$ of $\Gamma\_1$ to $\widehat{\mathfrak{N}}\_a(S^\*)$. It follows from $S^\* = S\_1 \mathbin{\widehat{+}} \widehat{\mathfrak{N}}\_a(S^\*)$ and (5.6.30) that $\Gamma\_1'$ is injective and that

$$
\operatorname{ran} \Gamma\_1' = \operatorname{ran} \Gamma\_1.\tag{5.6.33}
$$

With the inner product $(\cdot, \cdot)\_{\mathfrak{t}\_{S\_1}-a}$ the space $\widehat{\mathfrak{N}}\_a(S^\*)$ is a closed subspace of the Hilbert space $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a} \times \mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$ (see (5.6.13)). Since

$$\left| \left( \Gamma\_1' \widehat{f}\_a, \Gamma\_0 \widehat{g} \right) \right| = \left| (f\_a, g\_a)\_{\mathfrak{t}\_{S\_1} - a} \right| \le C \| f\_a \|\_{\mathfrak{t}\_{S\_1} - a} \| \Gamma\_0 \widehat{g} \|$$

by (5.6.29) and (5.6.27), it follows from

$$\| \Gamma\_1' \widehat{f}\_a \| = \sup\_{\| \Gamma\_0 \widehat{g} \| = 1} \left| (\Gamma\_1' \widehat{f}\_a, \Gamma\_0 \widehat{g}) \right| \le C \| f\_a \|\_{\mathfrak{t}\_{S\_1} - a}$$

that the operator $\Gamma\_1'$ is bounded in the topology of $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a} \times \mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$. Hence, $\Gamma\_1'$ is closed and the same is true for the inverse operator

$$(\Gamma\_1')^{-1} : \mathcal{G} \supset \text{ran} \, \Gamma\_1' \to \widehat{\mathfrak{N}}\_a(S^\*).$$

Assume that $(\Gamma\_1')^{-1}$ is unbounded. Then there exists a sequence $(\widehat{g}\_n)$ in $\widehat{\mathfrak{N}}\_a(S^\*)$ such that $\| g\_n \|\_{\mathfrak{t}\_{S\_1}-a} = 1$ and $\Gamma\_1' \widehat{g}\_n \to 0$ in $\mathcal{G}$. From (5.6.29) and the definition of $\Gamma\_0$ one obtains

$$1 = (g\_n, g\_n)\_{\mathfrak{t}\_{S\_1} - a} = - (\Gamma\_1' \widehat{g}\_n, \Gamma\_0 \widehat{g}\_n) = - (\Gamma\_1' \widehat{g}\_n, \Lambda g\_n) \le \|\Gamma\_1' \widehat{g}\_n\| \|\Lambda g\_n\|,$$

and as $\Lambda : \mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a} \to \mathcal{G}$ is bounded this yields

$$1 \le C' \|\Gamma\_1' \widehat{g}\_n\| \, \| g\_n \|\_{\mathfrak{t}\_{S\_1} - a} = C' \|\Gamma\_1' \widehat{g}\_n\| \to 0;$$

a contradiction. Hence, the operator $(\Gamma\_1')^{-1}$ is bounded. As $(\Gamma\_1')^{-1}$ is closed, it follows that $\operatorname{ran} \Gamma\_1' = \operatorname{dom} (\Gamma\_1')^{-1}$ is closed, which together with (5.6.33) and the density of $\operatorname{ran} \Gamma\_1$ in $\mathcal{G}$ shows (5.6.31).

Step 7. First it will be verified that the mappings $\Gamma\_0$ and $\Gamma\_1$ form a boundary triplet for $S^\*$. Observe that (5.6.29) implies the Green identity

$$(f',g) - (f,g') = (\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}) - (\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g}), \quad \widehat{f}, \widehat{g} \in S^\*.$$
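Indeed, spelled out in detail, taking (5.6.29) for $\widehat{f}, \widehat{g}$, the complex conjugate of (5.6.29) with the roles of $\widehat{f}$ and $\widehat{g}$ interchanged, and using the symmetry of the form $\mathfrak{t}\_{S\_1}$, one obtains

$$(\Gamma\_1 \widehat{f}, \Gamma\_0 \widehat{g}) - (\Gamma\_0 \widehat{f}, \Gamma\_1 \widehat{g}) = \bigl[(f', g) - \mathfrak{t}\_{S\_1}[f, g]\bigr] - \overline{\bigl[(g', f) - \mathfrak{t}\_{S\_1}[g, f]\bigr]} = (f', g) - (f, g'),$$

since $\overline{\mathfrak{t}\_{S\_1}[g, f]} = \mathfrak{t}\_{S\_1}[f, g]$ and $\overline{(g', f)} = (f, g')$.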

It remains to show that

$$\text{ran}\begin{pmatrix}\Gamma\_0\\\Gamma\_1\end{pmatrix} = \mathcal{G} \times \mathcal{G}.\tag{5.6.34}$$

For this, let $\varphi, \varphi' \in \mathcal{G}$. From $\operatorname{ran} \Gamma\_0 = \mathcal{G}$ in (5.6.17) and $\operatorname{ran} \Gamma\_1 = \mathcal{G}$ in (5.6.31) it is clear that there exist $\widehat{h}, \widehat{k} \in S^\*$ such that $\Gamma\_0 \widehat{h} = \varphi$ and $\Gamma\_1 \widehat{k} = \varphi'$. It follows from the transversality $S^\* = S\_\mathcal{F} \mathbin{\widehat{+}} S\_1$ that

$$
\widehat{h} = \widehat{h}\_{\mathcal{F}} + \widehat{h}\_{1} \quad \text{and} \quad \widehat{k} = \widehat{k}\_{\mathcal{F}} + \widehat{k}\_{1}, \quad \widehat{h}\_{\mathcal{F}}, \widehat{k}\_{\mathcal{F}} \in S\_{\mathcal{F}}, \widehat{h}\_{1}, \widehat{k}\_{1} \in S\_{1}.
$$

Define $\widehat{f} := \widehat{h}\_1 + \widehat{k}\_\mathcal{F} \in S^\*$. Making use of the facts that $\ker \Gamma\_0 = S\_\mathcal{F}$ in (5.6.15) and $\ker \Gamma\_1 = S\_1$ in (5.6.30), one obtains

$$\begin{aligned} \Gamma\_0 \widehat{f} &= \Gamma\_0 \widehat{h}\_1 = \Gamma\_0 \widehat{h} = \varphi, \\ \Gamma\_1 \widehat{f} &= \Gamma\_1 \widehat{k}\_{\mathcal{F}} = \Gamma\_1 \widehat{k} = \varphi', \end{aligned}$$

which shows (5.6.34). Therefore, $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ is a boundary triplet for $S^\*$.

Since $\ker \Gamma\_0 = S\_\mathcal{F}$ and $\ker \Gamma\_1 = S\_1$, and since $\Lambda$ is an extension of $\Gamma\_0$, see (5.6.10), one concludes that the boundary triplet $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ and the boundary pair $\{\mathcal{G}, \Lambda\}$ are compatible; see Definition 5.6.4.

It remains to check that $\Gamma\_1$ constructed in (5.6.28)–(5.6.29) is uniquely determined. Note that the mapping $\Gamma\_0$ and the kernel $S\_1$ of $\Gamma\_1$ are uniquely determined, as the boundary triplet is required to be compatible with the boundary pair $\{\mathcal{G}, \Lambda\}$. Under these circumstances the action of $\Gamma\_1$ is uniquely determined by formula (5.5.27) in Theorem 5.5.14. $\square$

The following result gives a connection, via a boundary pair $\{\mathcal{G}, \Lambda\}$, between the closed semibounded forms $\mathfrak{t}\_H$ corresponding to semibounded self-adjoint extensions $H$ of $S$ with $S\_1 \le H \le S\_\mathcal{F}$ and the closed nonnegative forms $\omega$ in $\mathcal{G}$. A similar result also involving boundary triplets follows later.

**Theorem 5.6.11.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ and let $S\_1$ be a semibounded self-adjoint extension of $S$ such that $S\_1$ and $S\_\mathcal{F}$ are transversal. Let $\{\mathcal{G}, \Lambda\}$ be a boundary pair for $S$ corresponding to $S\_1$. Then the following statements hold:

(i) If $H$ is a semibounded self-adjoint extension of $S$ such that $S\_1 \le H$, then there exists a closed nonnegative form $\omega$ in $\mathcal{G}$ defined on $\operatorname{dom} \omega = \Lambda(\operatorname{dom} \mathfrak{t}\_H)$ such that

$$\mathbf{t}\_H[f,g] = \mathbf{t}\_{S\_1}[f,g] + \omega[\Lambda f, \Lambda g], \qquad f, g \in \text{dom}\,\mathbf{t}\_H. \tag{5.6.35}$$

Moreover, the space Λ(dom H) is a core of the form ω.

(ii) If ω is a closed nonnegative form in G, then

$$\begin{aligned} \mathsf{t}[f,g] &= \mathsf{t}\_{S\_1}[f,g] + \omega[\Lambda f, \Lambda g], \\ \mathsf{dom}\,\mathsf{t} &= \left\{ f \in \mathsf{dom}\,\mathsf{t}\_{S\_1} : \Lambda f \in \mathsf{dom}\,\omega \right\}, \end{aligned} \tag{5.6.36}$$

is a closed semibounded form in $\mathfrak{H}$ and the corresponding self-adjoint relation $H$ is a semibounded self-adjoint extension of $S$ which satisfies $S\_1 \le H$.

The formulas (5.6.35) and (5.6.36) establish a one-to-one correspondence between all closed nonnegative forms $\omega$ in $\mathcal{G}$ and all semibounded self-adjoint extensions $H$ of $S$ satisfying the inequalities $S\_1 \le H \le S\_\mathcal{F}$.

Proof. (i) Let $H$ be a semibounded self-adjoint extension of $S$ and let $\mathfrak{t}\_H$ be the corresponding closed semibounded form. By assumption, $S\_1 \le H$ or, equivalently, $\mathfrak{t}\_{S\_1} \le \mathfrak{t}\_H$; cf. Theorem 5.2.4. Hence, $\operatorname{dom} \mathfrak{t}\_H \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$ and $\mathfrak{t}\_{S\_1}[f] \le \mathfrak{t}\_H[f]$ for all $f \in \operatorname{dom} \mathfrak{t}\_H$. Recall that $\mathfrak{t}\_{S\_\mathcal{F}}$, as the closure of $\mathfrak{t}\_S$, satisfies $\mathfrak{t}\_{S\_\mathcal{F}} \subset \mathfrak{t}\_{S\_1}$ and $\mathfrak{t}\_{S\_\mathcal{F}} \subset \mathfrak{t}\_H$. Since $\ker \Lambda = \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$, one concludes that the form

$$\omega[\Lambda f, \Lambda g] := \mathbf{t}\_H[f, g] - \mathbf{t}\_{S\_1}[f, g], \quad \text{dom}\,\omega = \Lambda(\text{dom}\,\mathbf{t}\_H), \quad f, g \in \text{dom}\,\mathbf{t}\_H,\tag{5.6.37}$$

is well defined and nonnegative in the Hilbert space $\mathcal{G}$. To see that it is well defined, just note that for $f, g \in \operatorname{dom} \mathfrak{t}\_H$ the Cauchy–Schwarz inequality shows

$$\left| \mathsf{t}\_{H}[f,g] - \mathsf{t}\_{S\_{1}}[f,g] \right| \leq \left| \mathsf{t}\_{H}[f,f] - \mathsf{t}\_{S\_{1}}[f,f] \right|^{\frac{1}{2}} \left| \mathsf{t}\_{H}[g,g] - \mathsf{t}\_{S\_{1}}[g,g] \right|^{\frac{1}{2}},$$

and hence $\mathfrak{t}\_H[f,g] - \mathfrak{t}\_{S\_1}[f,g]$ in (5.6.37) vanishes when either $f$ or $g$ belongs to $\ker \Lambda = \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$.

Next it will be shown that the form $\omega$ is closed in $\mathcal{G}$. To this end consider a sequence $(\varphi\_n)$ in $\operatorname{dom} \omega$ and assume that $\varphi\_n \to\_\omega \varphi$ for some $\varphi \in \mathcal{G}$, that is, $(\varphi\_n)$ is a sequence in $\operatorname{dom} \omega = \Lambda(\operatorname{dom} \mathfrak{t}\_H)$ such that

$$
\varphi\_n \to \varphi \in \mathfrak{G} \qquad \text{and} \qquad \omega[\varphi\_n - \varphi\_m] \to 0. \tag{5.6.38}
$$

Since $\ker \Lambda = \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$ and

$$\operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}} \subset \operatorname{dom} \mathfrak{t}\_H \subset \operatorname{dom} \mathfrak{t}\_{S\_1} = \left( \operatorname{dom} \mathfrak{t}\_{S\_1} \ominus\_{\mathfrak{t}\_{S\_1} - a} \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}} \right) \oplus\_{\mathfrak{t}\_{S\_1} - a} \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$$

for $a < m(S\_1)$, there exists a sequence $(f\_n)$ in $\operatorname{dom} \mathfrak{t}\_H \ominus\_{\mathfrak{t}\_{S\_1}-a} \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$ such that $\Lambda f\_n = \varphi\_n$. Moreover, since $\operatorname{ran} \Lambda = \mathcal{G}$, there exists $f \in \operatorname{dom} \mathfrak{t}\_{S\_1} \ominus\_{\mathfrak{t}\_{S\_1}-a} \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$ such that $\Lambda f = \varphi$; see Proposition 5.3.7. Since the restriction of $\Lambda$ to the space $\operatorname{dom} \mathfrak{t}\_{S\_1} \ominus\_{\mathfrak{t}\_{S\_1}-a} \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$ has a bounded inverse (see the discussion following Definition 5.6.1), it follows that $f\_n \to f$ in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$. In particular, $f\_n \to f$ in $\mathfrak{H}$ and $\mathfrak{t}\_{S\_1}[f\_n - f\_m] \to 0$. Then (5.6.37) and (5.6.38) imply

$$\begin{aligned} \mathbf{t}\_H[f\_n - f\_m] &= \mathbf{t}\_{S\_1}[f\_n - f\_m] + \omega[\Lambda f\_n - \Lambda f\_m] \\ &= \mathbf{t}\_{S\_1}[f\_n - f\_m] + \omega[\varphi\_n - \varphi\_m] \to 0, \end{aligned}$$

and as $\mathfrak{t}\_H$ is closed one concludes $f \in \operatorname{dom} \mathfrak{t}\_H$ and $\mathfrak{t}\_H[f\_n - f] \to 0$. This implies $\varphi = \Lambda f \in \operatorname{dom} \omega$. Furthermore, as $\mathfrak{t}\_{S\_1}$ is closed, also $\mathfrak{t}\_{S\_1}[f\_n - f] \to 0$, and hence

$$
\omega[\varphi\_n - \varphi] = \omega[\Lambda f\_n - \Lambda f] = \mathfrak{t}\_H[f\_n - f] - \mathfrak{t}\_{S\_1}[f\_n - f] \to 0,
$$

so that $\omega$ is a closed form in $\mathcal{G}$. It is clear that the definition of $\omega$ in (5.6.37) implies the representation of $\mathfrak{t}\_H$ in (i).

It remains to show that $\Lambda(\operatorname{dom} H)$ is a core of $\omega$. For this let $\varphi \in \operatorname{dom} \omega$ and choose $f \in \operatorname{dom} \mathfrak{t}\_H$ such that $\varphi = \Lambda f$. As $\operatorname{dom} H$ is a core of $\mathfrak{t}\_H$, there exists a sequence $(f\_n)$ in $\operatorname{dom} H$ such that $f\_n \to f$ in $\mathfrak{H}$ and $\mathfrak{t}\_H[f\_n - f] \to 0$. Then $0 \le (\mathfrak{t}\_{S\_1} - a)[f\_n - f] \le (\mathfrak{t}\_H - a)[f\_n - f] \to 0$ and, in particular, one has $f\_n \to f$ in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$. Setting $\varphi\_n := \Lambda f\_n$ one has $(\varphi\_n) \subset \Lambda(\operatorname{dom} H)$ and using the fact that $\Lambda$ is bounded one concludes that

$$
\varphi\_n = \Lambda f\_n \to \Lambda f = \varphi
$$

and

$$
\omega[\varphi\_n - \varphi] = \omega[\Lambda f\_n - \Lambda f] = \mathfrak{t}\_H[f\_n - f] - \mathfrak{t}\_{S\_1}[f\_n - f] \to 0.
$$

This shows that Λ(dom H) is a core of ω.

(ii) Assume that ω is a closed nonnegative form in G. Then it is clear that the form

$$\mathfrak{t}[f,g] = \mathfrak{t}\_{S\_1}[f,g] + \omega[\Lambda f, \Lambda g] \tag{5.6.39}$$

defined on $\operatorname{dom} \mathfrak{t} = \Lambda^{-1}(\operatorname{dom} \omega) \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$ is semibounded and $\mathfrak{t}[f] \ge \mathfrak{t}\_{S\_1}[f]$ holds for all $f \in \operatorname{dom} \mathfrak{t}$. To verify that $\mathfrak{t}$ is closed consider a sequence $(f\_n)$ in $\operatorname{dom} \mathfrak{t}$ such that $f\_n \to\_{\mathfrak{t}} f$ for some $f \in \mathfrak{H}$, that is, $f\_n \to f$ in $\mathfrak{H}$ and $\mathfrak{t}[f\_n - f\_m] \to 0$. Since the forms $\mathfrak{t}\_{S\_1} - a$, $a < m(S\_1)$, and $\omega$ are nonnegative, it follows from (5.6.39) and $(\mathfrak{t} - a)[f\_n - f\_m] \to 0$ that $0 \le (\mathfrak{t}\_{S\_1} - a)[f\_n - f\_m] \to 0$ and $\omega[\Lambda f\_n - \Lambda f\_m] \to 0$. As $\mathfrak{t}\_{S\_1}$ is a closed form in $\mathfrak{H}$, one concludes that $f \in \operatorname{dom} \mathfrak{t}\_{S\_1}$ and $\mathfrak{t}\_{S\_1}[f\_n - f] \to 0$. This shows that $f\_n$ converges to $f$ in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$ and as $\Lambda$ is bounded one has $\Lambda f\_n \to \Lambda f$ in $\mathcal{G}$. Moreover, since $\omega[\Lambda f\_n - \Lambda f\_m] \to 0$ and $\omega$ is closed in $\mathcal{G}$, one concludes that $\Lambda f \in \operatorname{dom} \omega$ and $\omega[\Lambda f\_n - \Lambda f] \to 0$. Hence, $f \in \operatorname{dom} \mathfrak{t} = \Lambda^{-1}(\operatorname{dom} \omega)$ and $\mathfrak{t}[f\_n - f] \to 0$, and $\mathfrak{t}$ is a closed form in $\mathfrak{H}$.

Let $H$ be the semibounded self-adjoint relation associated with $\mathfrak{t}$ via the first representation theorem; see Theorem 5.1.18. Since $\operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}} = \ker \Lambda$, it follows from (5.6.36) that $\mathfrak{t}\_{S\_\mathcal{F}} \subset \mathfrak{t}$. Hence, $\mathfrak{t}\_{S\_1} \le \mathfrak{t} \le \mathfrak{t}\_{S\_\mathcal{F}}$ or, equivalently, $S\_1 \le H \le S\_\mathcal{F}$; see Theorem 5.2.4. One concludes from Theorem 5.4.6 (or its proof) that $H$ is a self-adjoint extension of $S$. This completes the proof of (ii).

The indicated one-to-one correspondence is clear from (i) and (ii) by the uniqueness of the representing semibounded self-adjoint relation associated with a closed semibounded form. $\square$
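As a simple illustration of this correspondence (not taken from the text; assume that $S$ has defect numbers $(1,1)$, so that one may take $\mathcal{G} = \mathbb{C}$), the closed nonnegative forms in $\mathbb{C}$ are $\omega\_\theta[\varphi, \psi] = \theta\, \varphi \overline{\psi}$ with $\theta \in [0, \infty)$, together with the degenerate form with $\operatorname{dom} \omega = \{0\}$. Formula (5.6.36) then yields

$$\mathfrak{t}\_{H\_\theta}[f] = \mathfrak{t}\_{S\_1}[f] + \theta\, |\Lambda f|^2, \qquad \operatorname{dom} \mathfrak{t}\_{H\_\theta} = \operatorname{dom} \mathfrak{t}\_{S\_1},$$

so that $\theta = 0$ gives $H = S\_1$, while the degenerate form with $\operatorname{dom} \omega = \{0\}$ gives $\operatorname{dom} \mathfrak{t} = \ker \Lambda = \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$ and hence $H = S\_\mathcal{F}$.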

A combination of Theorem 5.6.11 with Theorem 5.4.6 leads to the following observations. Recall that the Kreĭn type extensions $S\_{K,x}$ and $S\_\mathcal{F}$ are transversal when $x < \gamma = m(S) = m(S\_\mathcal{F})$ (see (5.4.26)) and that in the nonnegative case $\gamma \ge 0$ the Kreĭn–von Neumann extension is given by $S\_{K,0}$; cf. Definition 5.4.2.

**Corollary 5.6.12.** Let S be a closed semibounded relation in H with lower bound γ.


Theorem 5.6.11 is a first step towards a full description of all semibounded self-adjoint extensions and their associated forms. The following result is a continuation of Theorem 5.5.14 for semibounded self-adjoint extensions (see Corollary 5.5.15) and an extension of the first part of Theorem 5.6.11, in which also the boundary conditions of the extensions and the corresponding forms are connected.

**Theorem 5.6.13.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ and let $S\_1$ be a semibounded self-adjoint extension of $S$ such that $S\_1$ and $S\_\mathcal{F}$ are transversal. Let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$ and let $\{\mathcal{G}, \Lambda\}$ be a compatible boundary pair for $S$ corresponding to $S\_1$. Assume that $H\_\Theta$ is a semibounded self-adjoint extension of $S$ corresponding to the self-adjoint relation $\Theta$ in $\mathcal{G}$ as in (5.5.32)–(5.5.33). Then $\Theta$ is semibounded in $\mathcal{G}$ and the corresponding closed semibounded form $\omega\_\Theta$ in $\mathcal{G}$ and the closed semibounded form $\mathfrak{t}\_{H\_\Theta}$ corresponding to $H\_\Theta$ are related by

$$\begin{aligned} \mathfrak{t}\_{H\_{\Theta}}[f,g] &= \mathfrak{t}\_{S\_1}[f,g] + \omega\_{\Theta}[\Lambda f, \Lambda g], \\ \operatorname{dom} \mathfrak{t}\_{H\_{\Theta}} &= \{ f \in \operatorname{dom} \mathfrak{t}\_{S\_1} : \Lambda f \in \operatorname{dom} \omega\_{\Theta} \}. \end{aligned} \tag{5.6.40}$$

Proof. The proof of the theorem will rely on the results in Theorem 5.5.14 and Corollary 5.5.15, where $H\_\Theta$ is now taken to be semibounded. In the first two steps of the proof the equality between the forms in (5.6.40) will be verified. In the last step the domain characterization in (5.6.40) will be shown.

Step 1. First recall from Corollary 5.5.15 the formula (5.5.34):

$$(f', g) = \mathfrak{t}\_{S\_1}[f,g] + (\Theta\_{\text{op}}\,\Gamma\_0 \widehat{f}, \Gamma\_0 \widehat{g}), \quad \widehat{f}, \widehat{g} \in H\_{\Theta}.\tag{5.6.41}$$

Since $H\_\Theta$ is assumed to be semibounded, it follows from Theorem 5.1.18 that $(f', g) = \mathfrak{t}\_{H\_\Theta}[f,g]$. As the boundary pair $\{\mathcal{G}, \Lambda\}$ is compatible with the boundary triplet $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$, the mapping $\Lambda$ is an extension of $\Gamma\_0$. Hence, (5.6.41) may now be rewritten as

$$\mathfrak{t}\_{H\_\Theta}[f,g] = \mathfrak{t}\_{S\_1}[f,g] + (\Theta\_{\text{op}}\Lambda f, \Lambda g), \quad f, g \in \operatorname{dom} H\_{\Theta}.\tag{5.6.42}$$

Step 2. In this step it is shown that the formula (5.6.42) can be extended to the form domain of $\mathfrak{t}\_{H\_\Theta}$ as in (5.6.40). First observe that by Lemma 5.6.5 one has $A\_0 = S\_\mathcal{F}$. Moreover, since $H\_\Theta$ is a semibounded extension of $S$, it follows from Proposition 5.5.6 that $\Theta$ is semibounded from below. Hence, (5.6.42) can be written as

$$\mathfrak{t}\_{H\_\Theta}[f,g] = \mathfrak{t}\_{S\_1}[f,g] + \omega\_{\Theta}[\Lambda f, \Lambda g], \quad f, g \in \operatorname{dom} H\_{\Theta},\tag{5.6.43}$$

where $\omega\_\Theta$ is the closed semibounded form corresponding to $\Theta$ in $\mathcal{G}$. It follows from Corollary 5.3.9 that

$$\text{dom}\,(H\_{\Theta}-a)^{\frac{1}{2}} \subset \text{dom}\,(S\_1 - a)^{\frac{1}{2}}\tag{5.6.44}$$

and hence there is a constant C > 0 such that

$$\|((S\_1)\_{\text{op}} - a)^{\frac{1}{2}} \varphi\| \le C \|((H\_\Theta)\_{\text{op}} - a)^{\frac{1}{2}} \varphi\|\tag{5.6.45}$$

for all $\varphi \in \operatorname{dom} (H\_\Theta - a)^{\frac{1}{2}}$.

Now let $f \in \operatorname{dom} \mathfrak{t}\_{H\_\Theta}$. As $\operatorname{dom} H\_\Theta$ is a core of $\mathfrak{t}\_{H\_\Theta}$, there exists a sequence $(f\_n)$ in $\operatorname{dom} H\_\Theta$ such that $f\_n \to f$ in $\mathfrak{H}$ and $\mathfrak{t}\_{H\_\Theta}[f\_n - f] \to 0$. By (5.6.44)–(5.6.45) it follows that $f \in \operatorname{dom} \mathfrak{t}\_{S\_1}$ and $\mathfrak{t}\_{S\_1}[f\_n - f] \to 0$, so that $f\_n \to f$ in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$. Since $\Lambda$ is bounded, this shows that $\Lambda f\_n \to \Lambda f$ in $\mathcal{G}$. Furthermore, from (5.6.43) one sees that

$$
\omega\_{\Theta}[\Lambda f\_n - \Lambda f\_m] = \mathfrak{t}\_{H\_\Theta}[f\_n - f\_m] - \mathfrak{t}\_{S\_1}[f\_n - f\_m] \to 0.
$$

Since $\omega\_\Theta$ is closed, one obtains

$$\Lambda f \in \operatorname{dom} \omega\_\Theta \qquad \text{and} \qquad \omega\_\Theta[\Lambda f\_n - \Lambda f] \to 0.$$

Therefore, the following inclusion has been shown

$$\operatorname{dom}\mathfrak{t}\_{H\_\Theta} \subset \left\{ f \in \operatorname{dom}\mathfrak{t}\_{S\_1} : \Lambda f \in \operatorname{dom}\omega\_{\Theta} \right\}.\tag{5.6.46}$$

Let $f, g \in \operatorname{dom} \mathfrak{t}\_{H\_\Theta}$ and choose $(f\_n), (g\_n)$ in $\operatorname{dom} H\_\Theta$ as above. Then one has $\mathfrak{t}\_{H\_\Theta}[f\_n, g\_n] \to \mathfrak{t}\_{H\_\Theta}[f,g]$, $\mathfrak{t}\_{S\_1}[f\_n, g\_n] \to \mathfrak{t}\_{S\_1}[f,g]$, and $\omega\_\Theta[\Lambda f\_n, \Lambda g\_n] \to \omega\_\Theta[\Lambda f, \Lambda g]$ as $n \to \infty$ by Lemma 5.1.8, and hence (5.6.43) extends to

$$\mathfrak{t}\_{H\_\Theta}[f,g] = \mathfrak{t}\_{S\_1}[f,g] + \omega\_{\Theta}[\Lambda f, \Lambda g], \quad f, g \in \operatorname{dom}\mathfrak{t}\_{H\_\Theta}.\tag{5.6.47}$$

Step 3. To complete the proof of the theorem the equality between the domains in (5.6.40) must be verified. Due to (5.6.46) it suffices to show that

$$\left\{ f \in \text{dom}\,\mathbf{t}\_{S\_1} : \Lambda f \in \text{dom}\,\omega\_{\Theta} \right\} \subset \text{dom}\,\mathbf{t}\_{H\_{\Theta}}.$$

Let $f \in \operatorname{dom} \mathfrak{t}\_{S\_1}$ and assume that $\varphi = \Lambda f \in \operatorname{dom} \omega\_\Theta$. Using the orthogonal decomposition

$$\operatorname{dom}\mathfrak{t}\_{S\_1} = \left(\operatorname{dom}\mathfrak{t}\_{S\_1} \ominus\_{\mathfrak{t}\_{S\_1}-a} \operatorname{dom}\mathfrak{t}\_{S\_\mathcal{F}}\right) \oplus\_{\mathfrak{t}\_{S\_1}-a} \operatorname{dom}\mathfrak{t}\_{S\_\mathcal{F}}, \quad a < m(S\_1), \tag{5.6.48}$$

write $f$ in the form $f = h + k$, where $h \in \operatorname{dom}\mathfrak{t}\_{S\_1} \ominus\_{\mathfrak{t}\_{S\_1}-a} \operatorname{dom}\mathfrak{t}\_{S\_\mathcal{F}}$ and $k \in \operatorname{dom}\mathfrak{t}\_{S\_\mathcal{F}}$. Then $k \in \operatorname{dom}\mathfrak{t}\_{H\_\Theta}$, and since $\ker \Lambda = \operatorname{dom}\mathfrak{t}\_{S\_\mathcal{F}}$, one has $\varphi = \Lambda h$. It remains to show that $h \in \operatorname{dom}\mathfrak{t}\_{H\_\Theta}$.

Recall that $\operatorname{dom} \Theta$ is a core of $\omega\_\Theta$. Hence, there exists a sequence $(\varphi\_n)$ in $\operatorname{dom} \Theta$ such that $\varphi\_n \to\_{\omega\_\Theta} \varphi$, that is,

$$
\varphi\_n \to \varphi \in \mathfrak{G} \qquad \text{and} \qquad \omega\_\Theta[\varphi\_n - \varphi\_m] \to 0.
$$

Note that $\varphi\_n \in \operatorname{dom} \Theta$ means $\{\varphi\_n, \varphi\_n'\} \in \Theta$ for some $\varphi\_n' \in \mathcal{G}$ and there exists $\{f\_n, f\_n'\} \in H\_\Theta$ such that $\Gamma\{f\_n, f\_n'\} = \{\varphi\_n, \varphi\_n'\}$. Hence, $\Lambda f\_n = \Gamma\_0\{f\_n, f\_n'\} = \varphi\_n$, where $f\_n \in \operatorname{dom} H\_\Theta \subset \operatorname{dom} \mathfrak{t}\_{H\_\Theta} \subset \operatorname{dom} \mathfrak{t}\_{S\_1}$. Using (5.6.48), one can write $f\_n$ in the form

$$f\_n = h\_n + k\_n, \quad h\_n \in \operatorname{dom} \mathfrak{t}\_{S\_1} \ominus\_{\mathfrak{t}\_{S\_1} - a} \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}, \ k\_n \in \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}.$$

From $\ker \Lambda = \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$ it is clear that $\varphi\_n = \Lambda f\_n = \Lambda h\_n$. Since the restriction of $\Lambda$ to $\operatorname{dom} \mathfrak{t}\_{S\_1} \ominus\_{\mathfrak{t}\_{S\_1}-a} \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$ has a bounded inverse, it follows from $\varphi\_n \to \varphi$ in $\mathcal{G}$ that $h\_n \to h$ in $\mathfrak{H}\_{\mathfrak{t}\_{S\_1}-a}$. In particular, $h\_n \to h$ in $\mathfrak{H}$ and $\mathfrak{t}\_{S\_1}[h\_n - h\_m] \to 0$. Then it follows from (5.6.47) that

$$\begin{aligned} \mathfrak{t}\_{H\_\Theta}[h\_n - h\_m] &= \mathfrak{t}\_{S\_1}[h\_n - h\_m] + \omega\_{\Theta}[\Lambda h\_n - \Lambda h\_m] \\ &= \mathfrak{t}\_{S\_1}[h\_n - h\_m] + \omega\_{\Theta}[\varphi\_n - \varphi\_m] \to 0, \end{aligned}$$

and as $\mathfrak{t}\_{H\_\Theta}$ is closed, one concludes that $h \in \operatorname{dom} \mathfrak{t}\_{H\_\Theta}$. $\square$

One may apply the second representation theorem (Theorem 5.1.23) to the closed form $\omega\_\Theta$ in Theorem 5.6.13. If $\mu \le m(\Theta)$, then it follows that

$$\begin{aligned} \omega\_{\Theta}[\Lambda f, \Lambda g] &= \left( (\Theta\_{\text{op}} - \mu)^{\frac{1}{2}} \Lambda f, (\Theta\_{\text{op}} - \mu)^{\frac{1}{2}} \Lambda g \right) + \mu \left( \Lambda f, \Lambda g \right), \\ \text{dom}\,\omega\_{\Theta} &= \text{dom}\,(\Theta\_{\text{op}} - \mu)^{\frac{1}{2}}. \end{aligned}$$

Hence, one obtains the following result; cf. Corollary 5.5.15.

**Corollary 5.6.14.** Let the assumptions be as in Theorem 5.6.13 and let $\mu \le m(\Theta)$. Then the closed semibounded form $\mathfrak{t}\_{H\_\Theta}$ corresponding to $H\_\Theta$ is given by

$$\begin{aligned} \mathfrak{t}\_{H\_\Theta}[f,g] &= \mathfrak{t}\_{S\_1}[f,g] + \left( (\Theta\_{\mathrm{op}} - \mu)^{\frac{1}{2}} \Lambda f, (\Theta\_{\mathrm{op}} - \mu)^{\frac{1}{2}} \Lambda g \right) + \mu \left( \Lambda f, \Lambda g \right), \\ \operatorname{dom} \mathfrak{t}\_{H\_\Theta} &= \left\{ f \in \operatorname{dom} \mathfrak{t}\_{S\_1} : \Lambda f \in \operatorname{dom} \left( \Theta\_{\mathrm{op}} - \mu \right)^{\frac{1}{2}} \right\}. \end{aligned}$$

Furthermore, if Θop ∈ **B**(Gop), then

$$\begin{aligned} \mathfrak{t}\_{H\_\Theta}[f,g] &= \mathfrak{t}\_{S\_1}[f,g] + \left(\Theta\_{\text{op}} \,\Lambda f, \Lambda g\right), \\ \operatorname{dom}\,\mathfrak{t}\_{H\_\Theta} &= \left\{ f \in \operatorname{dom}\,\mathfrak{t}\_{S\_1} : \Lambda f \in \operatorname{dom}\,\Theta\_{\text{op}} \right\}, \end{aligned} \tag{5.6.49}$$

and in the special case Θ ∈ **B**(G)

$$\mathfrak{t}\_{H\_{\Theta}}[f,g] = \mathfrak{t}\_{\mathbb{S}\_1}[f,g] + \left(\Theta \Lambda f, \Lambda g\right), \qquad \text{dom}\,\mathfrak{t}\_{H\_{\Theta}} = \text{dom}\,\mathfrak{t}\_{\mathbb{S}\_1}.$$

$$\square$$

**Example 5.6.15.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ with lower bound $\gamma$, fix $x < \gamma$, and consider the boundary triplet $\{\mathfrak{N}\_x(S^\*), \Gamma\_0, \Gamma\_1\}$ for $S^\*$ in Corollary 5.5.12 and the corresponding compatible boundary pair $\{\mathfrak{N}\_x(S^\*), \Lambda\}$ in Example 5.6.9. Assume that $H\_\Theta$ is a semibounded self-adjoint extension of $S$ corresponding to the self-adjoint relation $\Theta$ in $\mathfrak{N}\_x(S^\*)$ as in (5.5.32)–(5.5.33). Then $\Theta$ is semibounded in $\mathfrak{N}\_x(S^\*)$ and the corresponding closed semibounded form $\omega\_\Theta$ in $\mathfrak{N}\_x(S^\*)$ and the closed semibounded form $\mathfrak{t}\_{H\_\Theta}$ corresponding to $H\_\Theta$ are related by

$$\begin{aligned} \mathfrak{t}\_{H\_{\Theta}}[f,g] &= \mathfrak{t}\_{S\_{K,x}}[f,g] + \omega\_{\Theta}[f\_x,g\_x],\\ \operatorname{dom}\mathfrak{t}\_{H\_{\Theta}} &= \left\{ f = f\_{\mathcal{F}} + f\_x \in \operatorname{dom}\mathfrak{t}\_{S\_{\mathcal{F}}} + \mathfrak{N}\_x(S^\*) : f\_x \in \operatorname{dom}\omega\_{\Theta} \right\}. \end{aligned}$$

Let $H\_\Theta$ be a semibounded self-adjoint extension of $S$ corresponding to the self-adjoint relation $\Theta$ in $\mathcal{G}$ as in (5.5.32)–(5.5.33). The first boundary condition in (5.5.33) is the essential boundary condition given by $\Gamma\_0 \widehat{f} \in \operatorname{dom} \Theta\_{\mathrm{op}}$. Since $H\_\Theta$ is now assumed to be semibounded, it follows from $f \in \operatorname{dom} S^\* \subset \operatorname{dom} \Lambda$ that this condition can be written as

$$
\Lambda f = \Gamma\_0 \widehat{f} \in \text{dom}\,\Theta\_{\text{op}} \subset \text{dom}\,(\Theta\_{\text{op}} - \mu)^{\frac{1}{2}}, \quad \mu \le m(\Theta),
$$

which implies that $f \in \operatorname{dom} \mathfrak{t}\_{H\_\Theta}$. The second boundary condition in (5.5.33) is the natural boundary condition given by $P\_{\mathrm{op}}\, \Gamma\_1 \widehat{f} = \Theta\_{\mathrm{op}}\, \Gamma\_0 \widehat{f}$. It is subsumed in the additive term in the structure of the form $\mathfrak{t}\_{H\_\Theta}$:

$$\left( (\Theta\_{\mathrm{op}} - \mu)^{\frac{1}{2}} \Lambda f, (\Theta\_{\mathrm{op}} - \mu)^{\frac{1}{2}} \Lambda g \right) + \mu \left( \Lambda f, \Lambda g \right);$$

cf. Corollary 5.5.15; in the case of a bounded operator part $\Theta\_{\mathrm{op}}$ this simplifies to

$$\left(\Theta\_{\mathrm{op}}\,\Lambda f,\Lambda g\right);$$

cf. (5.6.49). In particular, the elements in $\operatorname{dom} H\_\Theta$ satisfy an essential boundary condition if and only if $\operatorname{dom} \Theta \neq \mathcal{G}$, that is, $\Theta \notin$ **B**$(\mathcal{G})$. Note that the extreme case $\operatorname{dom} \Theta = \{0\}$ corresponds to $\Lambda f = 0$ and $\Gamma\_0 \widehat{f} = 0$, i.e., $f \in \operatorname{dom} \mathfrak{t}\_{S\_\mathcal{F}}$ and $\widehat{f} \in S\_\mathcal{F}$.

**Remark 5.6.16.** In Theorem 5.6.11 a one-to-one correspondence between the closed nonnegative forms $\omega$ in $\mathcal{G}$ and the semibounded self-adjoint extensions $H$ of $S$ in $\mathfrak{H}$ satisfying $S\_1 \le H \le S\_\mathcal{F}$ is established. For closed semibounded forms $\omega$ in $\mathcal{G}$ the situation is different: although Theorem 5.6.13 shows that for each semibounded self-adjoint extension $H = H\_\Theta$ of $S$ there exists a closed semibounded form $\omega = \omega\_\Theta$ in $\mathcal{G}$ such that

$$\mathbf{t}\_H[f,g] = \mathbf{t}\_{S\_1}[f,g] + \omega[\Lambda f, \Lambda g],\tag{5.6.50}$$

one can also see that for an arbitrary closed semibounded form $\omega$ in $\mathcal{G}$ the right-hand side in (5.6.50) is not necessarily bounded from below. However, if, e.g., $\omega$ is a symmetric form with $\operatorname{dom} \omega = \mathcal{G}$ such that for some $a \ge 0$ and $b \in [0, 1)$

$$|\omega[\Lambda f]| \le a \|f\|^2 + b \mathbf{t}\_{S\_1}[f], \qquad f \in \text{dom } \mathbf{t}\_{S\_1},$$

then Theorem 5.1.16 shows that $\mathfrak{t}\_H$ in (5.6.50) is a closed semibounded form with $\operatorname{dom} \mathfrak{t}\_H = \operatorname{dom} \mathfrak{t}\_{S\_1}$ in $\mathfrak{H}$. In particular, in this situation the corresponding self-adjoint extension $H$ of $S$ is semibounded.
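A short computation (a sketch of the lower bound only; closedness is the content of Theorem 5.1.16) makes the semiboundedness explicit: since $|x| \le y$ implies $x \ge -y$, the relative bound above gives for all $f \in \operatorname{dom} \mathfrak{t}\_{S\_1}$

$$\mathfrak{t}\_H[f] = \mathfrak{t}\_{S\_1}[f] + \omega[\Lambda f] \ge (1-b)\, \mathfrak{t}\_{S\_1}[f] - a\|f\|^2 \ge \bigl( (1-b)\, m(S\_1) - a \bigr) \|f\|^2,$$

because $\mathfrak{t}\_{S\_1}[f] \ge m(S\_1)\|f\|^2$ and $1 - b > 0$; hence $m(H) \ge (1-b)\, m(S\_1) - a$.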

Recall from Proposition 5.5.8 that in the case of finite defect numbers or in the case that S<sup>F</sup> has a compact resolvent the implication

$$\Theta \text{ semibounded in } \mathfrak{G} \quad \Rightarrow \quad H\_{\Theta} \text{ semibounded in } \mathfrak{H}$$

holds. The following corollary supplements Theorem 5.6.13 and can be seen as an extension and completion of the second part of Theorem 5.6.11. When the defect numbers are not finite or the resolvent of $S\_\mathcal{F}$ is not compact, there is in general no analog of the second part of Theorem 5.6.11.

**Corollary 5.6.17.** Let $S$ be a closed semibounded relation in $\mathfrak{H}$ and let $S\_1$ be a semibounded self-adjoint extension of $S$ such that $S\_1$ and $S\_\mathcal{F}$ are transversal. Let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $S^\*$, let $\{\mathcal{G}, \Lambda\}$ be a compatible boundary pair for $S$ corresponding to $S\_1$, and assume, in addition, that one of the following conditions holds:


Let Θ be a semibounded self-adjoint relation in G and let H<sup>Θ</sup> be the corresponding self-adjoint extension of S as in (5.5.32)–(5.5.33). Then H<sup>Θ</sup> is semibounded and the closed semibounded forms tH<sup>Θ</sup> and ω<sup>Θ</sup> are related by (5.6.40).

The following corollary complements Corollary 5.6.12 (iii). If the symmetric relation S is positive, a natural choice for S<sub>1</sub> is the Kreĭn–von Neumann extension S<sub>K,0</sub>. A possible explicit choice for the boundary triplet can be found in Example 5.5.13.

**Corollary 5.6.18.** Let S be a closed semibounded relation in H with lower bound γ > 0, let {G, Γ<sub>0</sub>, Γ<sub>1</sub>} be a boundary triplet for S<sup>∗</sup>, and let {G, Λ} be a compatible boundary pair for S corresponding to the Kreĭn–von Neumann extension S<sub>K,0</sub>. Then the formula

$$\begin{split} \mathfrak{t}\_{H\_\Theta}[f,g] &= \mathfrak{t}\_{S\_{K,0}}[f,g] + \left(\Theta\_{\mathrm{op}}^{\frac{1}{2}}\Lambda f, \Theta\_{\mathrm{op}}^{\frac{1}{2}}\Lambda g\right), \\ \mathrm{dom}\,\mathfrak{t}\_{H\_\Theta} &= \left\{ f \in \mathrm{dom}\,\mathfrak{t}\_{S\_{K,0}} : \Lambda f \in \mathrm{dom}\,\Theta\_{\mathrm{op}}^{\frac{1}{2}}\right\}, \end{split} \tag{5.6.51}$$

establishes a one-to-one correspondence between all closed nonnegative forms t<sub>HΘ</sub> corresponding to nonnegative self-adjoint extensions H<sub>Θ</sub> of S in H and all closed nonnegative forms ω<sub>Θ</sub> corresponding to nonnegative self-adjoint relations Θ in G.

Proof. By assumption, one has S<sub>K,0</sub> = ker Γ<sub>1</sub> and hence the Weyl function M corresponding to {G, Γ<sub>0</sub>, Γ<sub>1</sub>} satisfies M(0) = 0 by Corollary 5.5.2 (viii). Assume that H<sub>Θ</sub> is a nonnegative self-adjoint extension of S with corresponding closed nonnegative form t<sub>HΘ</sub>. Since γ > 0, Proposition 5.5.6 with x = 0 shows that the self-adjoint relation Θ in G is nonnegative. Formula (5.6.51) follows from Theorem 5.6.13 and Corollary 5.6.14 with μ = 0. Conversely, if Θ is a nonnegative self-adjoint relation in G, then Theorem 5.6.11 (ii) shows that H<sub>Θ</sub> is a nonnegative self-adjoint extension of S and (5.6.51) holds. □

In the next corollary the ordering of semibounded self-adjoint extensions is translated into the ordering of the corresponding parameters.

**Corollary 5.6.19.** Let S be a closed semibounded relation in H and let {G, Γ<sub>0</sub>, Γ<sub>1</sub>} be a boundary triplet for S<sup>∗</sup>. Assume that

$$
\ker \Gamma\_0 = S\_{\mathrm{F}} \quad \text{and} \quad \ker \Gamma\_1 = S\_1,
$$

where S<sub>F</sub> is the Friedrichs extension and S<sub>1</sub> is a semibounded self-adjoint extension of S. Let H<sub>Θ1</sub> and H<sub>Θ2</sub> be semibounded self-adjoint extensions of S corresponding to the semibounded self-adjoint relations Θ<sub>1</sub> and Θ<sub>2</sub>. Then

$$H\_{\Theta\_1} \le H\_{\Theta\_2} \quad \Leftrightarrow \quad \Theta\_1 \le \Theta\_2. \tag{5.6.52}$$

In particular, S<sub>1</sub> ≤ H<sub>Θ2</sub> ⇔ 0 ≤ Θ<sub>2</sub>.

Proof. Let {G, Λ} be a compatible boundary pair for S corresponding to S<sub>1</sub> as in Theorem 5.6.6. Then according to Theorem 5.6.13 one has the following identities

$$\begin{aligned} \mathfrak{t}\_{H\_{\Theta\_1}}[f,g] &= \mathfrak{t}\_{S\_1}[f,g] + \omega\_{\Theta\_1}[\Lambda f, \Lambda g], \\ \operatorname{dom} \mathfrak{t}\_{H\_{\Theta\_1}} &= \left\{ f \in \operatorname{dom} \mathfrak{t}\_{S\_1} : \Lambda f \in \operatorname{dom} \omega\_{\Theta\_1} \right\}, \end{aligned} \tag{5.6.53}$$

and

$$\begin{aligned} \mathfrak{t}\_{H\_{\Theta\_2}}[f,g] &= \mathfrak{t}\_{S\_1}[f,g] + \omega\_{\Theta\_2}[\Lambda f, \Lambda g], \\ \operatorname{dom} \mathfrak{t}\_{H\_{\Theta\_2}} &= \left\{ f \in \operatorname{dom} \mathfrak{t}\_{S\_1} : \Lambda f \in \operatorname{dom} \omega\_{\Theta\_2} \right\}. \end{aligned} \tag{5.6.54}$$

Recall from Theorem 5.2.4 that H<sub>Θ1</sub> ≤ H<sub>Θ2</sub> if and only if t<sub>HΘ1</sub> ≤ t<sub>HΘ2</sub>. This last statement means by definition that

$$\operatorname{dom} \mathfrak{t}\_{H\_{\Theta\_2}} \subset \operatorname{dom} \mathfrak{t}\_{H\_{\Theta\_1}} \quad \text{and} \quad \mathfrak{t}\_{H\_{\Theta\_1}}[f] \le \mathfrak{t}\_{H\_{\Theta\_2}}[f], \qquad f \in \operatorname{dom} \mathfrak{t}\_{H\_{\Theta\_2}}, \tag{5.6.55}$$

which, via (5.6.53) and (5.6.54), is equivalent to

$$
\operatorname{dom} \mathfrak{t}\_{H\_{\Theta\_2}} \subset \operatorname{dom} \mathfrak{t}\_{H\_{\Theta\_1}} \quad \text{and} \quad \omega\_{\Theta\_1}[\Lambda f] \le \omega\_{\Theta\_2}[\Lambda f], \qquad f \in \operatorname{dom} \mathfrak{t}\_{H\_{\Theta\_2}}.\tag{5.6.56}
$$

Assume now that H<sub>Θ1</sub> ≤ H<sub>Θ2</sub>, i.e., that (5.6.56) (and (5.6.55)) holds. First it will be shown that dom t<sub>HΘ2</sub> ⊂ dom t<sub>HΘ1</sub> implies that

$$
\operatorname{dom}\Theta\_2 \subset \operatorname{dom}\omega\_{\Theta\_1}.\tag{5.6.57}
$$

To see this, let ϕ ∈ dom Θ<sub>2</sub>. Then {ϕ, ϕ′} ∈ Θ<sub>2</sub> for some ϕ′ ∈ G. Now choose {f, f′} ∈ H<sub>Θ2</sub> ⊂ S<sup>∗</sup> with the property Γ{f, f′} = {ϕ, ϕ′}. Then it follows that Λf = Γ<sub>0</sub>{f, f′} = ϕ. Furthermore, since f ∈ dom S<sup>∗</sup> ⊂ dom t<sub>S1</sub> and

$$
\varphi = \Lambda f \in \text{dom}\,\Theta\_2 \subset \text{dom}\,\omega\_{\Theta\_2},
$$

it follows that f ∈ dom t<sub>HΘ2</sub>; hence the inclusion dom t<sub>HΘ2</sub> ⊂ dom t<sub>HΘ1</sub> yields f ∈ dom t<sub>HΘ1</sub>, that is, ϕ = Λf ∈ dom ω<sub>Θ1</sub>. Hence, (5.6.57) has been shown. Next observe that due to the previous reasoning the inequality in (5.6.56) gives

$$
\omega\_{\Theta\_1}[\varphi] \le \omega\_{\Theta\_2}[\varphi], \quad \varphi \in \text{dom}\,\Theta\_2. \tag{5.6.58}
$$

Denote the restriction of the form ω<sub>Θ2</sub> to dom Θ<sub>2</sub> by ω̊<sub>Θ2</sub>. Then the inclusion (5.6.57) and the inequality (5.6.58) can be written as

$$
\omega\_{\Theta\_1} \le \mathring{\omega}\_{\Theta\_2},\tag{5.6.59}
$$

and, since dom Θ<sub>2</sub> is a core of ω<sub>Θ2</sub>, it follows from (5.6.59) and Lemma 5.2.2 (v) that

ω<sub>Θ1</sub> ≤ ω<sub>Θ2</sub> or, equivalently, Θ<sub>1</sub> ≤ Θ<sub>2</sub>.

Hence, H<sub>Θ1</sub> ≤ H<sub>Θ2</sub> implies that Θ<sub>1</sub> ≤ Θ<sub>2</sub>.

For the converse statement assume Θ<sub>1</sub> ≤ Θ<sub>2</sub> or, equivalently, ω<sub>Θ1</sub> ≤ ω<sub>Θ2</sub>, i.e.,

$$
\operatorname{dom} \omega\_{\Theta\_2} \subset \operatorname{dom} \omega\_{\Theta\_1} \quad \text{and} \quad \omega\_{\Theta\_1}[\varphi] \le \omega\_{\Theta\_2}[\varphi], \qquad \varphi \in \operatorname{dom} \omega\_{\Theta\_2}. \tag{5.6.60}
$$

It will be shown that (5.6.56) holds. Let f ∈ dom t<sub>HΘ2</sub>, so that f ∈ dom t<sub>S1</sub> and Λf ∈ dom ω<sub>Θ2</sub>. Then it follows from (5.6.60) that also Λf ∈ dom ω<sub>Θ1</sub>. Hence, one sees that dom t<sub>HΘ2</sub> ⊂ dom t<sub>HΘ1</sub>. Furthermore, if f ∈ dom t<sub>HΘ2</sub>, then it follows directly from (5.6.60) that ω<sub>Θ1</sub>[Λf] ≤ ω<sub>Θ2</sub>[Λf]. Thus, (5.6.56) holds and one concludes that H<sub>Θ1</sub> ≤ H<sub>Θ2</sub>.

Finally, note that for the choice Θ<sub>1</sub> = 0 one has H<sub>Θ1</sub> = ker Γ<sub>1</sub> = S<sub>1</sub> and hence the equivalence (5.6.52) takes the form S<sub>1</sub> ≤ H<sub>Θ2</sub> ⇔ 0 ≤ Θ<sub>2</sub>. □

If S is a semibounded relation in H with lower bound γ and one chooses H<sub>Θ1</sub> to be the Kreĭn type extension S<sub>K,x</sub> for some x ≤ γ in the previous corollary, then the next statement follows from (5.5.1).

**Corollary 5.6.20.** Let the assumptions be as in Corollary 5.6.19 and let H<sub>Θ</sub> be a semibounded self-adjoint extension of S corresponding to the self-adjoint relation Θ in G as in (5.5.32)–(5.5.33). Then for any x ≤ m(S)

$$S\_{K,x} \le H\_{\Theta} \quad \Leftrightarrow \quad M(x) \le \Theta.$$

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 6 Sturm–Liouville Operators**

Second-order Sturm–Liouville differential expressions generate self-adjoint differential operators in weighted L<sup>2</sup>-spaces on an interval (a, b). A brief exposition of elementary properties of Sturm–Liouville expressions can be found in Section 6.1; this includes the limit-circle and limit-point terminology. The corresponding maximal and minimal operators associated with the Sturm–Liouville expression are introduced in Section 6.2; here one can also find a discussion of quasi-derivatives, which are useful in limit-circle cases. In this chapter the boundary triplets and Weyl functions which can be associated with Sturm–Liouville operators will be studied. The case where the endpoints of (a, b) are regular or limit-circle is treated in Section 6.3, while the case where a is regular or limit-circle and b is limit-point is treated in Section 6.4. In each of these sections the spectrum is related to the limit properties of the Weyl function, as in Chapter 3. The case where both endpoints are limit-point can be found in Section 6.5. Here the useful technique of interface conditions is explained by means of the coupling concept in Section 4.6. Closely related is the case of exit space extensions resulting in boundary conditions depending on the eigenvalue parameter; such extensions are treated in Section 6.6. The characterization of the spectrum via subordinate solutions can be found in Section 6.7, where again the results in Chapter 3 play a central role. The rest of this chapter is devoted to boundary triplets and Weyl functions for Sturm–Liouville operators which are semibounded. Particular attention is paid to the corresponding semibounded forms and boundary pairs; cf. Section 5.6. The special case of regular endpoints is given in Section 6.8. In the singular case it is possible to construct semibounded closed forms by means of solutions which are nonoscillatory near the endpoints.
This construction, which is suggested by a particular form of the Green formula, can be found in Section 6.9. Section 6.10 contains an overview of the necessary properties of the so-called nonprincipal and principal solutions which make these forms useful. The case where both endpoints a and b are limit-circle is treated in Section 6.11, while the case where a is limit-circle and b is limit-point is treated in Section 6.12. In each section the connection between the boundary triplet and the form is studied in detail as in Section 5.6. Finally, in Section 6.13 the particular case L = −D<sup>2</sup> + q, where q is a real integrable potential on a half-line, is treated. Here again the spectral theory from Chapter 3 will be employed.

## **6.1 Sturm–Liouville differential expressions**

This section offers a brief review of the properties of the Sturm–Liouville differential expression L defined by

$$L = \frac{1}{r} \left[ -DpD + q \right], \quad D = d/dx,\tag{6.1.1}$$

where p, q, and r are assumed to be real functions on an open interval (a, b) with −∞ ≤ a < b ≤ ∞. Throughout the text the following minimal conditions will be imposed:

$$\begin{cases} p(x) \neq 0, \ r(x) > 0, & \text{for almost all } x \in (a, b), \\ 1/p, q, r \in L\_{\text{loc}}^1(a, b). \end{cases} \tag{6.1.2}$$

Here L<sup>1</sup><sub>loc</sub>(a, b) stands for the linear space of all (equivalence classes of) complex functions which are integrable on each compact subset K ⊂ (a, b).
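As a concrete illustration of the conditions (6.1.2) (an example supplied here, not taken from the text), consider the Legendre differential expression on (−1, 1):

```latex
% Legendre expression: p(x) = 1 - x^2, q(x) = 0, r(x) = 1 on (a, b) = (-1, 1).
L = -D\,(1 - x^2)\,D, \qquad
\frac{1}{p},\, q,\, r \in L^1_{\mathrm{loc}}(-1, 1), \qquad
p(x) \neq 0, \ r(x) > 0 \ \text{for almost all } x \in (-1, 1).
```

Here 1/p = (1 − x<sup>2</sup>)<sup>−1</sup> is integrable on every compact K ⊂ (−1, 1) but not up to the endpoints, so (6.1.2) holds while both endpoints are singular in the sense of Definition 6.1.1 below.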

For the reader's convenience some more notations which are used in the following are collected. The space of complex integrable functions on (a, b) will be denoted by L<sup>1</sup>(a, b). One denotes by L<sup>2</sup><sub>r,loc</sub>(a, b) the set of all functions f for which |f|<sup>2</sup>r ∈ L<sup>1</sup><sub>loc</sub>(a, b), while L<sup>2</sup><sub>r</sub>(a, b) stands for the set of all functions f for which |f|<sup>2</sup>r ∈ L<sup>1</sup>(a, b). The space L<sup>2</sup><sub>r</sub>(a, b) is a Hilbert space when equipped with the usual inner product

$$(f,g)\_{L^2\_r(a,b)} := \int\_a^b f(x)\overline{g(x)}r(x) \,dx.$$

For f ∈ L<sup>2</sup><sub>r,loc</sub>(a, b) the generic notations

$$f \in L\_r^2(a, a') \quad \text{or} \quad f \in L\_r^2(b', b)$$

indicate that |f|<sup>2</sup>r is integrable on an interval (a, a′) for some, and hence for all, a < a′ < b, or on an interval (b′, b) for some, and hence for all, a < b′ < b, respectively. A complex function f is absolutely continuous on (a, b) if there exists g ∈ L<sup>1</sup><sub>loc</sub>(a, b) such that

$$f(x) - f(y) = \int\_{y}^{x} g(t) \, dt \tag{6.1.3}$$

for all x, y ∈ (a, b). One denotes by AC(a, b) the linear space of absolutely continuous functions on (a, b). Note that if f ∈ AC(a, b), then f is differentiable almost everywhere on (a, b) and f′ = g almost everywhere, where g is as in (6.1.3). When a ∈ R, then AC[a, b) stands for the subclass of f ∈ AC(a, b) for which g ∈ L<sup>1</sup><sub>loc</sub>(a, b) in (6.1.3) additionally belongs to L<sup>1</sup>(a, a′) for some, and hence for all, a < a′ < b, in which case

$$f(x) - f(a) = \int\_{a}^{x} g(t) \, dt$$

for all x ∈ (a, b), and thus f(a) = lim<sub>x→a</sub> f(x). When b ∈ R there is a similar notation AC(a, b], and for f ∈ AC(a, b] one has f(b) = lim<sub>x→b</sub> f(x). The notation AC[a, b] is analogous. If for some a < c < b there are functions f<sub>l</sub> ∈ AC(a, c] and f<sub>r</sub> ∈ AC[c, b) with the property f<sub>l</sub>(c) = f<sub>r</sub>(c), then the function f : (a, b) → C defined by

$$f(x) = \begin{cases} f\_{\mathbf{l}}(x), & a < x \le c, \\ f\_{\mathbf{r}}(x), & c < x < b, \end{cases}$$

belongs to the space AC(a, b), as follows easily from the above observations.

In order to apply the differential expression L in (6.1.1) to a complex function f on (a, b) in a meaningful way one must first assume that f ∈ AC(a, b), so that as a consequence f is differentiable almost everywhere. However, then the function pf′ is only defined almost everywhere. The natural domain of the differential expression L in (6.1.1) is the linear space of all f ∈ AC(a, b) for which the equivalence class [pf′] (in the sense of Lebesgue measure) contains an absolutely continuous function, which again will be denoted by pf′. For such functions f one defines Lf by

$$(Lf)(x) = \frac{1}{r(x)} \left[ -(pf')'(x) + q(x)f(x) \right], \quad x \in (a, b),$$

so that (Lf)(x) is well defined almost everywhere. This convention will be used tacitly: for f ∈ AC(a, b) the assertion pf′ ∈ AC(a, b) means that the equivalence class [pf′] contains an absolutely continuous function, which is denoted by pf′. Sufficient conditions for the equivalence class [pf′] to have a representative in AC(a, b) can be found in Theorem 6.1.2.

The following simple observation will be used frequently: for all functions f ∈ AC(a, b) with pf′ ∈ AC(a, b) one has

$$L\overline{f} = \overline{Lf} \tag{6.1.4}$$

since the coefficient functions of L are assumed to be real.

Often the coefficient functions are integrable in a neighborhood of an endpoint. Hence, the following definition is presented.

**Definition 6.1.1.** Let the coefficient functions p, q, and r of the differential expression L in (6.1.1) satisfy the conditions (6.1.2). Then L is said to be regular at the endpoint a if a ∈ R and 1/p, q, r ∈ L<sup>1</sup>(a, c) for some c ∈ (a, b), and regular at the endpoint b if b ∈ R and 1/p, q, r ∈ L<sup>1</sup>(c, b) for some c ∈ (a, b).

The endpoint a or b is said to be regular if L is regular there and singular if a or b is not a regular endpoint, respectively.

Under the conditions in (6.1.2) there is an existence and uniqueness result for initial value problems involving the inhomogeneous equation (L − λ)f = g when the initial values are posed at an interior point of (a, b). The initial value problem may also be posed at a finite endpoint when the differential expression L is regular there. The uniqueness and existence result can be proved by writing the Sturm–Liouville equation as a first-order system of differential equations:

$$
\begin{pmatrix} f \\ pf' \end{pmatrix}' = \begin{pmatrix} 0 & 1/p \\ q - \lambda r & 0 \end{pmatrix} \begin{pmatrix} f \\ pf' \end{pmatrix} - \begin{pmatrix} 0 \\ rg \end{pmatrix}.
$$

The new initial value problem is equivalent to a Volterra integral equation which can be solved in the usual way by successive approximations when all data are locally integrable; see, e.g., [754, Theorem 2.1]. Note also that the assumption g ∈ L<sup>2</sup><sub>r,loc</sub>(a, b) in the next theorem implies that rg ∈ L<sup>1</sup><sub>loc</sub>(a, b); this follows by means of the Cauchy–Schwarz inequality from the condition r ∈ L<sup>1</sup><sub>loc</sub>(a, b).
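The reduction to a first-order system also suggests a simple numerical scheme. The following sketch (an illustration, not part of the text) integrates the system with the classical fourth-order Runge–Kutta method; the function name and the concrete data p = r = 1, q = 0, λ = 1, g = 0 are chosen here for illustration, in which case the system encodes f″ = −f, and the initial values f(0) = 1, (pf′)(0) = 0 give f(x) = cos x.

```python
import math

def solve_first_order_system(p, q, r, lam, g, x0, x1, f0, pf0, n=2000):
    """Integrate (f, pf')' = ((1/p) pf', (q - lam*r) f - r*g) by classical RK4.
    A numerical sketch; p, q, r, g are callables on [x0, x1]."""
    def rhs(x, f, pf):
        return (pf / p(x), (q(x) - lam * r(x)) * f - r(x) * g(x))
    h = (x1 - x0) / n
    x, f, pf = x0, f0, pf0
    for _ in range(n):
        k1 = rhs(x, f, pf)
        k2 = rhs(x + h / 2, f + h / 2 * k1[0], pf + h / 2 * k1[1])
        k3 = rhs(x + h / 2, f + h / 2 * k2[0], pf + h / 2 * k2[1])
        k4 = rhs(x + h, f + h * k3[0], pf + h * k3[1])
        f += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        pf += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return f, pf

# p = r = 1, q = 0, lam = 1, g = 0: the system encodes f'' = -f, so the
# initial values f(0) = 1, (pf')(0) = 0 yield f(x) = cos x.
one = lambda x: 1.0
zero = lambda x: 0.0
f_end, pf_end = solve_first_order_system(one, zero, one, 1.0, zero,
                                         0.0, math.pi / 2, 1.0, 0.0)
```

At x = π/2 the computed pair (f, pf′) should approximate (cos(π/2), −sin(π/2)) = (0, −1).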

**Theorem 6.1.2.** Let g ∈ L<sup>2</sup><sub>r,loc</sub>(a, b) and c<sub>1</sub>, c<sub>2</sub> ∈ C. Then for all λ ∈ C and x<sub>0</sub> ∈ (a, b) the initial value problem

$$(L - \lambda)f = g, \qquad f(x\_0) = c\_1, \quad (pf')(x\_0) = c\_2,\tag{6.1.5}$$

has a unique solution f ∈ AC(a, b) for which pf′ ∈ AC(a, b). Moreover, if a or b is regular, then x<sub>0</sub> = a or x<sub>0</sub> = b is allowed and f, pf′ belong to AC[a, b) or AC(a, b], respectively. In addition, the functions

$$
\lambda \mapsto f(x, \lambda) \quad \text{and} \quad \lambda \mapsto (pf')(x, \lambda)
$$

are entire for each x ∈ (a, b) and for x = a or x = b if the endpoint a or b is regular, respectively.

Let f,g be complex functions in AC(a, b). Then the Wronskian determinant W(f,g) is defined by

$$W(f,g) = p(fg'-f'g) = f(pg') - (pf')g.\tag{6.1.6}$$

The value at x ∈ (a, b) of W(f, g) will be denoted by W<sub>x</sub>(f, g). For complex functions f, g, h, k in AC(a, b) one has the so-called Plücker identity

$$W(f,g)W(h,k) = W(f,h)W(g,k) - W(f,k)W(g,h). \tag{6.1.7}$$

This identity can be easily verified by writing out the various terms according to (6.1.6). The Wronskian determinant and the Plücker identity will be applied in conjunction with the differential expression L. Assume, in addition, that pf′, pg′ ∈ AC(a, b) (in the sense that their equivalence classes contain an element in AC(a, b)). Then differentiation of the Wronskian gives an identity involving the differential expression L:

$$(W(f,g))' = r\left[\left(Lf\right)g - f\left(Lg\right)\right].\tag{6.1.8}$$

When g in (6.1.8) is a solution of (L − μ)y = 0 for some μ ∈ C one obtains the useful formula

$$(W(f,g))' = r\left((L-\mu)f\right)g.\tag{6.1.9}$$

Furthermore, if for some λ ∈ C the function f is a solution of (L − λ)y = 0, then (6.1.9) gives

$$(W(f,g))' = r\left(\lambda - \mu\right)f\,g.\tag{6.1.10}$$

In particular, one sees that the Wronskian x ↦ W<sub>x</sub>(f, g) is constant for solutions f, g of (L − λ)y = 0. Moreover, if the functions f, g are solutions of (L − λ)y = 0, then it is straightforward to verify that these functions are linearly independent if and only if W(f, g) ≠ 0.
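Since the Plücker identity (6.1.7) is purely algebraic in the values of f, g, h, k and their quasi-derivatives at a fixed point x, it can be spot-checked numerically. The following sketch (an illustration, not from the text) represents each function by the hypothetical pair (value, quasi-derivative) at x and compares both sides of (6.1.7) for randomly chosen data.

```python
import random

def wronskian(u, v):
    # u = (u(x), (pu')(x)); by (6.1.6), W_x(u, v) = u (pv') - (pu') v
    return u[0] * v[1] - u[1] * v[0]

random.seed(1)
f, g, h, k = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
lhs = wronskian(f, g) * wronskian(h, k)
rhs = wronskian(f, h) * wronskian(g, k) - wronskian(f, k) * wronskian(g, h)
```

Both sides agree up to floating-point rounding, as the identity predicts.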

Integration by parts in (6.1.8) over a compact subinterval [α, β] ⊂ (a, b) leads to the Green or Lagrange identity:

$$\int\_{\alpha}^{\beta} \left[ (Lf)(x) \overline{g(x)} - f(x) \overline{(Lg)(x)} \right] r(x) \, dx = W\_x(f, \overline{g})|\_{\alpha}^{\beta}, \tag{6.1.11}$$

assuming that f, pf′, g, pg′ ∈ AC(a, b) and Lf, Lg ∈ L<sup>2</sup><sub>r</sub>(α, β). In particular, when f = g is a solution of (L − λ)y = 0 for some λ ∈ C this gives

$$\left(\lambda - \overline{\lambda}\right) \int\_{\alpha}^{\beta} |f(x)|^2 \, r(x) \, dx = W\_x(f, \overline{f})|\_{\alpha}^{\beta},\tag{6.1.12}$$

see also (6.1.10).
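The Green identity (6.1.11) can also be made concrete numerically. The following sketch (an illustration, not from the text) assumes p = r = 1 and q = 0 on [α, β] = [0, 1] and uses the real test functions f(x) = x² and g(x) = x³, so that Lf = −f″ = −2, Lg = −g″ = −6x, and the conjugate of g equals g; the left-hand side of (6.1.11) is approximated by the midpoint rule.

```python
f = lambda x: x**2
pf = lambda x: 2.0 * x      # (pf')(x) with p = 1
g = lambda x: x**3
pg = lambda x: 3.0 * x**2   # (pg')(x) with p = 1
Lf = lambda x: -2.0         # Lf = -f'' since p = r = 1, q = 0
Lg = lambda x: -6.0 * x

def wronskian(x):
    # W_x(f, g) = f (pg') - (pf') g, cf. (6.1.6); g is real, so its
    # conjugate is g itself
    return f(x) * pg(x) - pf(x) * g(x)

n = 20000
h = 1.0 / n
lhs = sum(h * (Lf(x) * g(x) - f(x) * Lg(x))
          for x in ((i + 0.5) * h for i in range(n)))   # midpoint rule
rhs = wronskian(1.0) - wronskian(0.0)
```

Here the integrand is 4x³, so both sides equal 1 up to quadrature error.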

The interest in the present chapter is in solutions of (L − λ)y = 0 which are square-integrable with respect to the weight r near the endpoint a or the endpoint b. Thus, if f is a solution of (L − λ)y = 0 on (a, b) for some λ ∈ C, then for λ ∈ C \ R the identity (6.1.12) implies that

$$f \in L\_r^2(a, a') \quad \Leftrightarrow \quad \lim\_{x \to a} W\_x(f, \overline{f}) \quad \text{exists},$$

$$f \in L\_r^2(b', b) \quad \Leftrightarrow \quad \lim\_{x \to b} W\_x(f, \overline{f}) \quad \text{exists}.$$

Clearly, if these statements hold for some a < a′ < b or a < b′ < b, then they hold for all a < a′ < b or a < b′ < b, respectively.

It follows that under the circumstances of Theorem 6.1.2 for any λ ∈ C the homogeneous equation has a fundamental system of solutions, i.e., for any λ ∈ C there are two solutions u<sub>1</sub>(·, λ) and u<sub>2</sub>(·, λ) of (L − λ)y = 0 which are linearly independent. This can be seen by imposing the conditions

$$
\begin{pmatrix} u\_1(x\_0, \lambda) & u\_2(x\_0, \lambda) \\ (pu\_1')(x\_0, \lambda) & (pu\_2')(x\_0, \lambda) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
$$

for a fixed x<sub>0</sub> ∈ (a, b); when L is regular at the endpoint a or b, then x<sub>0</sub> = a or x<sub>0</sub> = b, respectively, is also allowed. Note that then the Wronskian determinant satisfies W<sub>x</sub>(u<sub>1</sub>(·, λ), u<sub>2</sub>(·, λ)) = 1 for all x ∈ (a, b). Hence, for g ∈ L<sup>2</sup><sub>r,loc</sub>(a, b) it is clear that the function

$$h(x) = u\_1(x, \lambda) \int\_{x\_0}^x u\_2(t, \lambda) g(t) r(t) \, dt - u\_2(x, \lambda) \int\_{x\_0}^x u\_1(t, \lambda) g(t) r(t) \, dt$$

belongs to AC(a, b), while (ph′)(x) is equal almost everywhere to

$$(p u\_1')(x, \lambda) \int\_{x\_0}^x u\_2(t, \lambda) g(t) r(t) \, dt - (p u\_2')(x, \lambda) \int\_{x\_0}^x u\_1(t, \lambda) g(t) r(t) \, dt,$$

and the last expression belongs to AC(a, b). Hence, h provides a solution of the inhomogeneous equation (L − λ)h = g with h(x<sub>0</sub>) = 0 and (ph′)(x<sub>0</sub>) = 0. Moreover, by adding c<sub>1</sub>u<sub>1</sub>(·, λ) + c<sub>2</sub>u<sub>2</sub>(·, λ), c<sub>1</sub>, c<sub>2</sub> ∈ C, to the solution h one obtains a function

$$f = h + c\_1 u\_1(\cdot, \lambda) + c\_2 u\_2(\cdot, \lambda) \tag{6.1.13}$$

which is a solution of the initial value problem (6.1.5). The formula (6.1.13) is sometimes referred to as the variation of constants formula.
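The variation of constants construction above can be sketched numerically. In the following illustration (not from the text) the two integrals defining h are approximated by the composite trapezoid rule; with the sample data p = r = 1, q = 0, λ = 0, x₀ = 0 one has u₁(x) = 1 and u₂(x) = x, and for g = 1 the resulting particular solution is h(x) = −x²/2, which satisfies −h″ = g with h(0) = (ph′)(0) = 0.

```python
def particular_solution(u1, u2, g, r, x0, x, n=1000):
    """h(x) = u1(x) * I1 - u2(x) * I2 with I1 = int_{x0}^{x} u2 g r dt and
    I2 = int_{x0}^{x} u1 g r dt, via the composite trapezoid rule."""
    step = (x - x0) / n
    I1 = I2 = 0.0
    for i in range(n):
        t0, t1 = x0 + i * step, x0 + (i + 1) * step
        I1 += step * (u2(t0) * g(t0) * r(t0) + u2(t1) * g(t1) * r(t1)) / 2
        I2 += step * (u1(t0) * g(t0) * r(t0) + u1(t1) * g(t1) * r(t1)) / 2
    return u1(x) * I1 - u2(x) * I2

# p = r = 1, q = 0, lam = 0: u1(x) = 1, u2(x) = x, and for g = 1 the
# particular solution is h(x) = x**2/2 - x*x = -x**2/2.
h_val = particular_solution(lambda t: 1.0, lambda t: t,
                            lambda t: 1.0, lambda t: 1.0, 0.0, 1.0)
```

At x = 1 the computed value should be close to −1/2.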

The following result is about smoothly cutting off the solution of an inhomogeneous Sturm–Liouville equation near an endpoint of the interval (a, b), so that it becomes trivial in a neighborhood of that endpoint.

**Proposition 6.1.3.** Let g ∈ L<sup>2</sup><sub>r,loc</sub>(a, b), λ ∈ C, and let f be a solution of the inhomogeneous equation

(L − λ)f = g,

with f, pf′ ∈ AC(a, b). Let [α, β] ⊂ (a, b) be a compact subinterval. Then there exist functions f<sub>a</sub> and g<sub>a</sub> with

$$f\_a, pf\_a' \in AC(a, b) \quad \text{and} \quad g\_a \in L^2\_{r, \text{loc}}(a, b),$$

such that

$$(L - \lambda)f\_a = g\_a,$$

and, in addition,

$$f\_a(t) = \begin{cases} f(t), & t \in (a, \alpha], \\ 0, & t \in [\beta, b), \end{cases} \quad g\_a(t) = \begin{cases} g(t), & t \in (a, \alpha], \\ 0, & t \in [\beta, b). \end{cases} \tag{6.1.14}$$

Likewise, there exist functions f<sup>b</sup> and g<sup>b</sup> with

$$f\_b, pf\_b' \in AC(a, b) \quad \text{and} \quad g\_b \in L^2\_{r, \text{loc}}(a, b),$$

such that

$$(L - \lambda)f\_b = g\_b,$$

and, in addition,

$$f\_b(t) = \begin{cases} 0, & t \in (a, \alpha], \\ f(t), & t \in [\beta, b), \end{cases} \quad g\_b(t) = \begin{cases} 0, & t \in (a, \alpha], \\ g(t), & t \in [\beta, b). \end{cases} \tag{6.1.15}$$

Proof. The cut-off process at the endpoint a as exhibited in (6.1.15) will be shown; the other case of cutting off at the endpoint b as in (6.1.14) is treated in a similar way. Define the functions f<sub>b</sub> and g<sub>b</sub> as indicated on the interval (a, α) and on the interval (β, b). On the interval [α, β] choose any function h ∈ L<sup>2</sup><sub>r</sub>(α, β) and define the function f<sub>b</sub> on (α, β) by

$$f\_b(x) = u\_1(x, \lambda) \int\_{\alpha}^{x} u\_2(t, \lambda) h(t) r(t) \, dt - u\_2(x, \lambda) \int\_{\alpha}^{x} u\_1(t, \lambda) h(t) r(t) \, dt,$$

where u1(·, λ) and u2(·, λ) form a fundamental system of (L − λ)y = 0, fixed by standard initial conditions at α:

$$
\begin{pmatrix} u\_1(\alpha, \lambda) & u\_2(\alpha, \lambda) \\ (pu\_1')(\alpha, \lambda) & (pu\_2')(\alpha, \lambda) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix};
$$

cf. Theorem 6.1.2. Then it is clear that on the interval [α, β] the function f<sub>b</sub> satisfies (L − λ)f<sub>b</sub> = h and f<sub>b</sub>(α) = 0, (pf<sub>b</sub>′)(α) = 0. Furthermore, one sees that

$$
\begin{pmatrix} f\_b(\beta) \\ (pf\_b')(\beta) \end{pmatrix} = \begin{pmatrix} u\_1(\beta,\lambda) & u\_2(\beta,\lambda) \\ (pu\_1')(\beta,\lambda) & (pu\_2')(\beta,\lambda) \end{pmatrix} \begin{pmatrix} \int\_\alpha^\beta u\_2(t,\lambda)h(t)r(t) \, dt \\ -\int\_\alpha^\beta u\_1(t,\lambda)h(t)r(t) \, dt \end{pmatrix}.$$

Now one may choose the function h ∈ L<sup>2</sup><sub>r</sub>(α, β) in such a way that

$$
\begin{pmatrix} f\_b(\beta) \\ (pf\_b')(\beta) \end{pmatrix} = \begin{pmatrix} f(\beta) \\ (pf')(\beta) \end{pmatrix}.
$$

To see this, note that the mapping from L<sup>2</sup><sub>r</sub>(α, β) to C<sup>2</sup> defined by

$$h \mapsto \begin{pmatrix} \int\_{\alpha}^{\beta} u\_2(t, \lambda) h(t) r(t) \, dt \\ -\int\_{\alpha}^{\beta} u\_1(t, \lambda) h(t) r(t) \, dt \end{pmatrix}$$

is surjective: any element in C<sup>2</sup> which is orthogonal to its range is trivial.

Observe that with such a choice of h the above components of f<sub>b</sub> belong to AC(a, α], AC[α, β], and AC[β, b), respectively, and that there are no jumps. There is a similar statement for pf<sub>b</sub>′ and hence one concludes that f<sub>b</sub>, pf<sub>b</sub>′ ∈ AC(a, b). Since h ∈ L<sup>2</sup><sub>r</sub>(α, β), one sees that the choice

$$g\_b(t) = h(t), \quad t \in [\alpha, \beta],$$

implies g<sub>b</sub> ∈ L<sup>2</sup><sub>r,loc</sub>(a, b) and (L − λ)f<sub>b</sub> = g<sub>b</sub>. □

The following lemma is useful in the proof of Theorem 6.1.5.

**Lemma 6.1.4.** Assume that r ∈ L<sup>1</sup><sub>loc</sub>[b′, b) is nonnegative almost everywhere and let ϕ ∈ L<sup>2</sup><sub>r</sub>(b′, b) be a nonnegative function. If u ∈ L<sup>2</sup><sub>r,loc</sub>[b′, b) and there exist nonnegative constants A and B such that

$$|u(x)|^2 \le \varphi(x)^2 \left( A + B \int\_{b'}^x |u(s)|^2 r(s) \, ds \right), \quad b' \le x < b,\tag{6.1.16}$$

then u ∈ L<sup>2</sup><sub>r</sub>(b′, b).

Proof. For B = 0 the statement is clear. In the following it will be assumed that B > 0. Since ϕ ∈ L<sup>2</sup><sub>r</sub>(b′, b), one can choose b′ < c < b such that

$$2B\left(\int\_c^b \varphi(x)^2 r(x) \, dx\right) < 1.$$

Let y ∈ R be an arbitrary number with b′ < c < y < b. Then it is clear from the assumption (6.1.16) that

$$\left|u(x)\right|^2 \le A\varphi(x)^2 + B\varphi(x)^2 \int\_{b'}^y \left|u(s)\right|^2 r(s) \, ds, \quad c \le x \le y.$$

Multiply this inequality by 2r(x) and integrate over the interval [c, y]; then

$$\begin{aligned} 2\int\_c^y |u(x)|^2 r(x) \, dx &\le 2A \int\_c^y \varphi(x)^2 r(x) \, dx \\ &+ 2B \left( \int\_c^y \varphi(x)^2 r(x) \, dx \right) \left( \int\_{b'}^y |u(s)|^2 r(s) \, ds \right) \\ &< \frac{A}{B} + \int\_{b'}^y |u(s)|^2 r(s) \, ds. \end{aligned}$$

It follows that for any c < y < b

$$\int\_{c}^{y} |u(x)|^{2} r(x) \, dx \le \frac{A}{B} + \int\_{b'}^{c} |u(s)|^{2} r(s) \, ds.$$

The monotone convergence theorem implies that u ∈ L<sup>2</sup><sub>r</sub>(c, b), and hence one concludes u ∈ L<sup>2</sup><sub>r</sub>(b′, b). □

The next two theorems present fundamental results proved in a purely analytical way.

**Theorem 6.1.5.** Assume that for some λ<sub>0</sub> ∈ C all solutions of (L − λ<sub>0</sub>)y = 0 belong to L<sup>2</sup><sub>r</sub>(a, a′) or to L<sup>2</sup><sub>r</sub>(b′, b), respectively. Then for any λ ∈ C all solutions of (L − λ)y = 0 belong to L<sup>2</sup><sub>r</sub>(a, a′) or to L<sup>2</sup><sub>r</sub>(b′, b), respectively.

Proof. It suffices to give the proof for the endpoint b. In the following, fix λ<sub>0</sub> ∈ C and let u<sub>1</sub> := u<sub>1</sub>(·, λ<sub>0</sub>) and u<sub>2</sub> := u<sub>2</sub>(·, λ<sub>0</sub>) be two linearly independent solutions of (L − λ<sub>0</sub>)y = 0 such that

$$u\_1 \in L\_r^2(b', b) \quad \text{and} \quad u\_2 \in L\_r^2(b', b) \tag{6.1.17}$$

for some a < b′ < b. Let λ ∈ C and let u := u(·, λ) be an arbitrary solution of (L − λ)y = 0. It will be shown that u ∈ L<sup>2</sup><sub>r</sub>(b′, b), which proves the theorem.

Assume without loss of generality that

$$
\begin{pmatrix} u\_1(b',\lambda\_0) & u\_2(b',\lambda\_0) \\ (pu\_1')(b',\lambda\_0) & (pu\_2')(b',\lambda\_0) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
$$

Observe that (L − λ<sub>0</sub>)u = (λ − λ<sub>0</sub>)u, so that by the variation of constants formula there exist α<sub>1</sub>, α<sub>2</sub> ∈ C such that

$$\begin{aligned} u(x) &= \alpha\_1 u\_1(x) + \alpha\_2 u\_2(x) \\ &+ (\lambda - \lambda\_0) \left[ u\_1(x) \int\_{b'}^x u\_2(s) u(s) r(s) \, ds - u\_2(x) \int\_{b'}^x u\_1(s) u(s) r(s) \, ds \right] \end{aligned}$$

for all x ∈ (a, b). Define

α = max{|α<sub>1</sub>|, |α<sub>2</sub>|}, ϕ(x) = max{|u<sub>1</sub>(x)|, |u<sub>2</sub>(x)|}, x ∈ (a, b),

and note that ϕ ∈ L<sup>2</sup><sub>r</sub>(b′, b) by (6.1.17). It follows from the above representation of u that

$$|u(x)| \le 2\left[\alpha\varphi(x) + |\lambda - \lambda\_0|\varphi(x) \int\_{b'}^x \varphi(s)|u(s)|r(s) \, ds\right],$$

which leads to

$$|u(x)|^2 \le 8\alpha^2 \varphi(x)^2 + 8|\lambda - \lambda\_0|^2 \varphi(x)^2 \left(\int\_{b'}^x \varphi(s)|u(s)|r(s) \, ds\right)^2.$$

An application of the Cauchy–Schwarz inequality gives

$$|u(x)|^2 \le 8\alpha^2 \varphi(x)^2 + B\varphi(x)^2 \int\_{b'}^x |u(s)|^2 r(s) \, ds,$$

where

$$B = 8|\lambda - \lambda\_0|^2 \int\_{b'}^{b} \varphi(s)^2 r(s) \, ds.$$

Since ϕ ∈ L<sup>2</sup><sub>r</sub>(b′, b) one can apply Lemma 6.1.4 with A = 8α<sup>2</sup> and B as above. This leads to u ∈ L<sup>2</sup><sub>r</sub>(b′, b), as claimed. □

**Limit-circle case and limit-point case.** The following discussion is devoted to the construction of solutions of (L − λ)y = 0 that belong to L<sup>2</sup><sub>r</sub>(a, a′) or L<sup>2</sup><sub>r</sub>(b′, b). Here the case for L<sup>2</sup><sub>r</sub>(b′, b) will be considered; the treatment for the case L<sup>2</sup><sub>r</sub>(a, a′) is entirely similar. For λ ∈ C \ R let u := u(·, λ) and v := v(·, λ) be solutions of (L − λ)y = 0, and denote

$$\{u, v\}\_x = \frac{W\_x(u, \overline{v})}{\lambda - \overline{\lambda}}.$$
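As an aside, the monotonicity properties of this bracket used below rest on a one-line computation; here is a sketch, using $(pu')' = (q - \lambda r)u$ for a solution $u$ of $(L - \lambda)y = 0$ together with the conjugate relation for $\overline{u}$:

```latex
\frac{d}{dx} W_x(u, \overline{u})
  = u(x)\,(p\,\overline{u}')'(x) - (pu')'(x)\,\overline{u}(x)
  = (\lambda - \overline{\lambda})\, r(x)\, |u(x)|^2 .
```

Dividing by $\lambda - \overline{\lambda}$ shows that $x \mapsto \{u, u\}\_x$ has almost everywhere the derivative $r(x)|u(x)|^2 \ge 0$; integrating this is what produces identities of the type (6.1.18) below.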

It is clear that $u \mapsto \{u, v\}\_x$ is linear and $v \mapsto \{u, v\}\_x$ is antilinear; in addition, $\overline{\{u, v\}\_x} = \{v, u\}\_x$. Fix $a < c < b$; then for any solution $u$ of $(L - \lambda)y = 0$ one has

$$\int\_{c}^{x} |u(t)|^{2} \, r(t) \, dt = \{u, u\}\_{x} - \{u, u\}\_{c};\tag{6.1.18}$$

cf. (6.1.12). Hence, the function $x \mapsto \{u, u\}\_x$ is nondecreasing on $(c, b)$. In the following let $u\_1 := u\_1(\cdot, \lambda)$ and $u\_2 := u\_2(\cdot, \lambda)$ be two linearly independent solutions of $(L - \lambda)y = 0$ fixed by

$$
\begin{pmatrix} u\_1(c,\lambda) & u\_2(c,\lambda) \\ (pu\_1')(c,\lambda) & (pu\_2')(c,\lambda) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\tag{6.1.19}
$$

so that $W(u\_1, u\_2) = 1$. Then it is clear that $\{u\_1, u\_1\}\_c = 0$, $\{u\_2, u\_2\}\_c = 0$, and that for $x > c$:

$$\begin{aligned} \{u\_1, u\_1\}\_x &= \int\_c^x |u\_1(t)|^2 \, r(t) \, dt > 0, \\ \{u\_2, u\_2\}\_x &= \int\_c^x |u\_2(t)|^2 \, r(t) \, dt > 0; \end{aligned} \tag{6.1.20}$$

cf. (6.1.18). For each $\zeta \in \mathbb{C}$ define a solution $u = u(\cdot, \lambda)$ of $(L - \lambda)y = 0$ in terms of the fundamental system $u\_1, u\_2$ by $u = \zeta u\_1 + u\_2$. Then for $c < x < b$ one has

$$\{u,u\}\_x = \zeta \overline{\zeta} \{u\_1,u\_1\}\_x + \zeta \{u\_1,u\_2\}\_x + \overline{\zeta} \{u\_2,u\_1\}\_x + \{u\_2,u\_2\}\_x$$

and it easily follows from this identity that

$$\frac{\{u,u\}\_x}{\{u\_1,u\_1\}\_x} = \left( |\zeta - \zeta\_x|^2 - \frac{\{u\_1,u\_2\}\_x \{u\_2,u\_1\}\_x - \{u\_1,u\_1\}\_x \{u\_2,u\_2\}\_x}{\{u\_1,u\_1\}\_x \{u\_1,u\_1\}\_x} \right),$$

where $\zeta\_x \in \mathbb{C}$ is defined by

$$
\zeta\_x = -\frac{\{u\_2, u\_1\}\_x}{\{u\_1, u\_1\}\_x}.\tag{6.1.21}
$$

It is a direct consequence of the definition, $(\lambda - \overline{\lambda})^2 = -4|\mathrm{Im}\,\lambda|^2$, and the Plücker identity (6.1.7) with $f = u\_1$, $g = u\_2$, $h = \overline{u}\_1$, and $k = \overline{u}\_2$, that

$$\begin{aligned} \frac{\{u\_1, u\_2\}\_x \{u\_2, u\_1\}\_x - \{u\_1, u\_1\}\_x \{u\_2, u\_2\}\_x}{\{u\_1, u\_1\}\_x \{u\_1, u\_1\}\_x} &= \frac{W(u\_1, \overline{u}\_1) W(u\_2, \overline{u}\_2) - W(u\_1, \overline{u}\_2) W(u\_2, \overline{u}\_1)}{4|\mathrm{Im}\,\lambda|^2 (\{u\_1, u\_1\}\_x)^2} \\ &= \frac{W(u\_1, u\_2) W(\overline{u}\_1, \overline{u}\_2)}{4|\mathrm{Im}\,\lambda|^2 (\{u\_1, u\_1\}\_x)^2} \\ &= r\_x^2, \end{aligned}$$

where $r\_x > 0$ is defined for $c < x < b$ by

$$r\_x = \frac{1}{2|\text{Im}\,\lambda|\{u\_1, u\_1\}\_x}.\tag{6.1.22}$$

Consequently, for all $c < x < b$ the solution $u = \zeta u\_1 + u\_2$ of $(L - \lambda)y = 0$ satisfies the identity

$$\{u, u\}\_x = \{u\_1, u\_1\}\_x \left( |\zeta - \zeta\_x|^2 - r\_x^2 \right),\tag{6.1.23}$$

where $\zeta\_x \in \mathbb{C}$ and $r\_x > 0$ are given by (6.1.21) and (6.1.22), respectively. Since $\{u\_1, u\_1\}\_x > 0$ for all $x > c$ by (6.1.20), it follows from (6.1.23) that the equation $\{u, u\}\_x = 0$ describes the circle with center $\zeta\_x$ and radius $r\_x$, and that for $\zeta = \zeta\_x$ one has $\{u, u\}\_x = -\{u\_1, u\_1\}\_x \, r\_x^2 < 0$. Hence, by (6.1.23), one sees for $u = \zeta u\_1 + u\_2$ and $c < x < b$ that

$$\{u, u\}\_x \le 0 \quad \Leftrightarrow \quad |\zeta - \zeta\_x| \le r\_x. \tag{6.1.24}$$

For each $c < x < b$ the closed disk with center $\zeta\_x$ and radius $r\_x$ will be denoted by $\mathbb{D}(\zeta\_x, r\_x)$. Now let $c < x\_1 < x\_2 < b$ and assume that $\zeta \in \mathbb{D}(\zeta\_{x\_2}, r\_{x\_2})$. Then it follows from (6.1.24) that $\{u, u\}\_{x\_2} \le 0$, where $u = \zeta u\_1 + u\_2$. Recall that (6.1.18) with $u = \zeta u\_1 + u\_2$ implies that

$$\{u, u\}\_{x\_1} \le \{u, u\}\_{x\_2} \le 0.$$

By (6.1.24) this means that $\zeta \in \mathbb{D}(\zeta\_{x\_1}, r\_{x\_1})$. In other words,

$$c < x\_1 < x\_2 < b \quad \Rightarrow \quad \mathbb{D}(\zeta\_{x\_2}, r\_{x\_2}) \subset \mathbb{D}(\zeta\_{x\_1}, r\_{x\_1}).\tag{6.1.25}$$

Therefore, as $x \to b$ the disks $\mathbb{D}(\zeta\_x, r\_x)$ either tend to a limit disk or shrink to exactly one point. These are the limit-circle case and the limit-point case, respectively. In the limit-circle case $r\_b = \lim\_{x \to b} r\_x > 0$ is the radius of the limit circle and its center is given by $\zeta\_b = \lim\_{x \to b} \zeta\_x$. In the limit-point case $r\_b = \lim\_{x \to b} r\_x = 0$ and $\zeta\_b = \lim\_{x \to b} \zeta\_x$ show that the limit circle degenerates into one point: the limit point.
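The nesting property (6.1.25) can be observed numerically. The following sketch is an illustration only, not taken from the book: it uses the constant-coefficient expression $L = -d^2/dx^2$ on $(0, \infty)$, i.e., $p = r = 1$, $q = 0$, with $c = 0$ and $\lambda = i$, and evaluates the centers (6.1.21) and radii (6.1.22). This example is in the limit-point case at $\infty$, so the radii shrink to $0$ and the limit point is $\zeta\_b = -i/\sqrt{\lambda}$, corresponding to the decaying solution $e^{i\sqrt{\lambda}x}$.

```python
import numpy as np

# Illustration (an assumption, not from the book): L = -d^2/dx^2 on (0, oo),
# so p = r = 1, q = 0, with c = 0 and spectral parameter lam = i.
lam = 1j
k = np.sqrt(lam)                      # principal square root, Im(k) > 0

# Fundamental system fixed by (6.1.19) at c = 0:
# u1(0) = 1, (pu1')(0) = 0 and u2(0) = 0, (pu2')(0) = 1.
u1  = lambda x: np.cos(k * x)
du1 = lambda x: -k * np.sin(k * x)
u2  = lambda x: np.sin(k * x) / k
du2 = lambda x: np.cos(k * x)

def bracket(f, df, g, dg, x):
    """{f, g}_x = W_x(f, conj(g)) / (lam - conj(lam)), as in the text."""
    W = f(x) * np.conj(dg(x)) - df(x) * np.conj(g(x))
    return W / (lam - np.conj(lam))

def weyl_disk(x):
    """Center (6.1.21) and radius (6.1.22) of the disk D(zeta_x, r_x)."""
    b11 = bracket(u1, du1, u1, du1, x).real   # {u1, u1}_x is real and > 0
    zeta = -bracket(u2, du2, u1, du1, x) / b11
    r = 1.0 / (2.0 * abs(lam.imag) * b11)
    return zeta, r
```

Evaluating `weyl_disk` at increasing values of `x` shows strictly decreasing radii, each disk contained in the previous one, with all disks containing the limit point.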

The main fact, which goes together with Theorem 6.1.5, is that at each endpoint and for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$ there is at least one nontrivial solution in $L^2\_r(a, a')$ or in $L^2\_r(b', b)$, respectively. This leads to the following classification of the limit-circle and limit-point cases.

**Theorem 6.1.6.** Let $L$ be the differential expression in (6.1.1) on $(a, b)$ such that the conditions in (6.1.2) are satisfied, and let $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Then the following statements hold for the endpoint $b$:

(i) There exists a nontrivial solution of $(L - \lambda)y = 0$ which belongs to $L^2\_r(b', b)$.

(ii) The limit-point case prevails at the endpoint $b$ if and only if there exists a nontrivial solution of $(L - \lambda)y = 0$ which does not belong to $L^2\_r(b', b)$. In this case

$$\lim\_{x \to b} W\_x(u, \overline{u}) = 0$$

holds for each solution $u$ of $(L - \lambda)y = 0$ such that $u \in L^2\_r(b', b)$.

(iii) The limit-circle case prevails at the endpoint $b$ if and only if every solution of $(L - \lambda)y = 0$ belongs to $L^2\_r(b', b)$. In this case there exist a fundamental system $u\_1$ and $u\_2$ of $(L - \lambda)y = 0$ and a disk with center $\zeta\_b$ and radius $r\_b > 0$ so that with $u = \zeta u\_1 + u\_2$

$$\lim\_{x \to b} W\_x(u, \overline{u}) = 0$$

holds for all $\zeta$ with $|\zeta - \zeta\_b| = r\_b$.

There are similar statements for the endpoint a.

Proof. It suffices to give the proof for the endpoint $b$, as the proof for the other endpoint is completely similar. Let $u\_1 := u\_1(\cdot, \lambda)$ and $u\_2 := u\_2(\cdot, \lambda)$ be two linearly independent solutions of $(L - \lambda)y = 0$ fixed by (6.1.19).

Step 1. Assume that $b$ is in the limit-circle case. Then $r\_b = \lim\_{x \to b} r\_x > 0$ is the radius of the limit circle and $\zeta\_b = \lim\_{x \to b} \zeta\_x$ is its center. It follows from (6.1.22) that

$$\lim\_{x \to b} \{u\_1, u\_1\}\_x = \frac{1}{2|\mathrm{Im}\,\lambda| \, r\_b}$$

and by (6.1.20) and the monotone convergence theorem one therefore obtains

$$\int\_{c}^{b} |u\_1(t)|^2 \, r(t) \, dt = \lim\_{x \to b} \{u\_1, u\_1\}\_x < \infty.$$

Moreover, for any $\zeta$ with $|\zeta - \zeta\_b| \le r\_b$ it follows from (6.1.18) with $u = \zeta u\_1 + u\_2$ and (6.1.24) that

$$\int\_c^x |\zeta u\_1(t) + u\_2(t)|^2 \, r(t) \, dt = \{u, u\}\_x - \{u, u\}\_c \le -\{u, u\}\_c.$$

Hence, the monotone convergence theorem implies that

$$\int\_{c}^{b} |\zeta u\_1(t) + u\_2(t)|^2 \, r(t) \, dt < \infty.$$

Therefore, $u\_1$ and $\zeta u\_1 + u\_2$ form a fundamental system of $(L - \lambda)y = 0$ and both functions belong to $L^2\_r(b', b)$.

Still assuming that b is in the limit-circle case, take the limit x → b in (6.1.23) to obtain

$$\{u, u\}\_b = \{u\_1, u\_1\}\_b \left( |\zeta - \zeta\_b|^2 - r\_b^2 \right).$$

Since $\{u\_1, u\_1\}\_b > 0$ it follows for $|\zeta - \zeta\_b| = r\_b$ that $\{u, u\}\_b = 0$, so that

$$\lim\_{x \to b} W\_x(u, \overline{u}) = 0.$$

Step 2. Assume that $b$ is in the limit-point case. Then $r\_b = \lim\_{x \to b} r\_x = 0$ and $\zeta\_b = \lim\_{x \to b} \zeta\_x$ show that the limit circle degenerates into the limit point. It follows from (6.1.22) that

$$\lim\_{x \to b} \{u\_1, u\_1\}\_x = \lim\_{x \to b} \frac{1}{2|\text{Im}\,\lambda| \, r\_x} = \infty,$$

and therefore

$$\int\_{c}^{b} |u\_1(t)|^2 \, r(t) \, dt = \lim\_{x \to b} \{u\_1, u\_1\}\_x = \infty.$$

Thus, there exists a solution $v$ of $(L - \lambda)y = 0$ such that $v \notin L^2\_r(b', b)$.

Still assuming that $b$ is in the limit-point case, observe that by (6.1.25) $\zeta\_b$ lies in all disks with center $\zeta\_x$ and radius $r\_x$, and therefore it follows from (6.1.24) that $\{u\_{\zeta\_b}, u\_{\zeta\_b}\}\_x \le 0$, where $u\_{\zeta\_b} = \zeta\_b u\_1 + u\_2$. Hence, by (6.1.18),

$$\int\_c^x |u\_{\zeta\_b}(t)|^2 \, r(t) \, dt = \{u\_{\zeta\_b}, u\_{\zeta\_b}\}\_x - \{u\_{\zeta\_b}, u\_{\zeta\_b}\}\_c \le -\{u\_{\zeta\_b}, u\_{\zeta\_b}\}\_c,$$

and the monotone convergence theorem implies that

$$\int\_{c}^{b} |u\_{\zeta\_{b}}(t)|^{2} \, r(t) \, dt < \infty,$$

that is, $u\_{\zeta\_b} \in L^2\_r(b', b)$. Thus, $u\_1$ and $u\_{\zeta\_b}$ form a fundamental system of $(L - \lambda)y = 0$, and since $u\_1 \notin L^2\_r(b', b)$, every solution in $L^2\_r(b', b)$ must be a multiple of $u\_{\zeta\_b}$. It follows from (6.1.22), (6.1.20), and (6.1.23) that for $u\_{\zeta\_b} = \zeta\_b u\_1 + u\_2$ and $c < x < b$

$$\begin{aligned} -\frac{1}{4|\mathrm{Im}\,\lambda|^2 \{u\_1, u\_1\}\_x} &= -\{u\_1, u\_1\}\_x \, r\_x^2 \\ &\le \{u\_1, u\_1\}\_x \left( |\zeta\_b - \zeta\_x|^2 - r\_x^2 \right) = \{u\_{\zeta\_b}, u\_{\zeta\_b}\}\_x \le 0. \end{aligned}$$

Since $\{u\_1, u\_1\}\_x \to \infty$ as $x \to b$, it follows from these inequalities that

$$\lim\_{x \to b} \{u\_{\zeta\_b}, u\_{\zeta\_b}\}\_x = 0.$$

In other words, $\lim\_{x \to b} W\_x(u\_{\zeta\_b}, \overline{u\_{\zeta\_b}}) = 0$, and since every solution $u \in L^2\_r(b', b)$ of $(L - \lambda)y = 0$ is a multiple of $u\_{\zeta\_b}$, it follows that

$$\lim\_{x \to b} W\_x(u, \overline{u}) = 0.$$

Step 3. It follows from Step 1 and Step 2 that both in the limit-circle case and in the limit-point case there is a nontrivial solution of $(L - \lambda)y = 0$ which belongs to $L^2\_r(b', b)$. Therefore, (i) has been shown.

According to Step 1, in the limit-circle case every solution of $(L - \lambda)y = 0$ belongs to $L^2\_r(b', b)$. Conversely, if every solution of $(L - \lambda)y = 0$ belongs to $L^2\_r(b', b)$, then the limit-circle case must prevail. To see this, assume that the limit-point case prevails. Then by Step 2 there is a nontrivial solution of $(L - \lambda)y = 0$ which does not belong to $L^2\_r(b', b)$, which gives a contradiction. Therefore, (iii) has been shown.

According to Step 2, the limit-point case implies that there exists a nontrivial solution of $(L - \lambda)y = 0$ which does not belong to $L^2\_r(b', b)$. Conversely, if there exists a nontrivial solution of $(L - \lambda)y = 0$ which does not belong to $L^2\_r(b', b)$, then the limit-point case must prevail. To see this, assume that the limit-circle case prevails. Then by Step 1 all solutions of $(L - \lambda)y = 0$ belong to $L^2\_r(b', b)$, which gives a contradiction. Therefore, (ii) has been shown. □

By means of Theorem 6.1.5 the alternative in Theorem 6.1.6 is shown to be independent of $\lambda \in \mathbb{C} \setminus \mathbb{R}$. The existence of these two possibilities at an endpoint is known as Weyl's alternative.

**Corollary 6.1.7.** Let $L$ be the differential expression in (6.1.1) on $(a, b)$ such that the conditions in (6.1.2) are satisfied. Then the following statements hold:

(i) If for some $\lambda\_0 \in \mathbb{C} \setminus \mathbb{R}$ the limit-circle case prevails at the endpoint $b$, then the limit-circle case prevails at $b$ for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$.

(ii) If for some $\lambda\_0 \in \mathbb{C} \setminus \mathbb{R}$ the limit-point case prevails at the endpoint $b$, then the limit-point case prevails at $b$ for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$.


There are similar statements for the endpoint a.

In addition to the assertions in Corollary 6.1.7, note that if for some $\lambda \in \mathbb{R}$ every solution of $(L - \lambda)y = 0$ belongs to $L^2\_r(b', b)$, then the limit-circle case prevails for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Likewise, if for some $\lambda \in \mathbb{R}$ there is at most one nontrivial solution of $(L - \lambda)y = 0$ which belongs to $L^2\_r(b', b)$, then the limit-point case prevails for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$.

So far the Sturm–Liouville differential expression L in (6.1.1) was considered on the interval (a, b) under the conditions (6.1.2). At the end of this section some attention is paid to the following extra condition:

$$p(x) > 0 \quad \text{for almost all } x \in (a, b). \tag{6.1.26}$$

This sign condition will play an important role later in this chapter. Here are a couple of useful remarks.

Let $v$ be a real solution of $(L - \lambda)y = 0$, $\lambda \in \mathbb{R}$, and let $\alpha < \beta$ be a pair of consecutive zeros of $v$, i.e., $v(\alpha) = v(\beta) = 0$ and $v(x) \ne 0$ for $x \in (\alpha, \beta)$. Assume the sign condition (6.1.26). Then

$$v(t) > 0 \text{ for } \alpha < t < \beta \quad \Rightarrow \quad (pv')(\alpha) > 0 \text{ and } (pv')(\beta) < 0,$$

and there is a similar implication when the signs are changed. It suffices to show the first inequality. By the uniqueness and existence theorem it is not possible to have $(pv')(\alpha) = 0$. Now assume that $(pv')(\alpha) < 0$; then, since $pv'$ is absolutely continuous, there exists $\delta > 0$ such that $pv' < 0$ on the interval $(\alpha - \delta, \alpha + \delta)$. As $p(x) > 0$ almost everywhere on $(a, b)$, one sees that $v'(x) < 0$ almost everywhere on $(\alpha - \delta, \alpha + \delta)$. Since $v$ is absolutely continuous one has $v(x) = \int\_\alpha^x v'(t) \, dt$, which implies that $v(x) \le 0$ for $\alpha < x < \alpha + \delta$; a contradiction. Hence, it has been shown that $(pv')(\alpha) > 0$. The above implication will play a role in the following lemma, which is concerned with the Sturm comparison theory.

**Lemma 6.1.8.** Assume the additional sign condition (6.1.26) and let $u$ and $v$ be real solutions of $(L - \lambda)y = 0$ with $\lambda \in \mathbb{R}$. Then the following statements are equivalent:


(i) The solutions $u$ and $v$ are linearly independent.

(ii) Between any two consecutive zeros of $v$ there is exactly one zero of $u$.

Moreover, let $w$ be a real solution of $(L - \mu)y = 0$ with $\mu > \lambda$. Then between consecutive zeros of $v$ there is at least one zero of $w$.

Proof. (i) ⇒ (ii) Let $u$ and $v$ be real solutions which are linearly independent, so that $x \mapsto W\_x(u, v)$ is a nonzero constant. Let $\alpha < \beta$ be consecutive zeros of $v$ and assume that $v(t) > 0$ for $\alpha < t < \beta$. Then clearly

$$0 < W\_{\alpha}(u,v)W\_{\beta}(u,v) = u(\alpha)(pv')(\alpha)u(\beta)(pv')(\beta). \tag{6.1.27}$$

The inequality (6.1.27) together with $(pv')(\alpha) > 0$ and $(pv')(\beta) < 0$ shows that $u(\alpha)u(\beta) < 0$. Hence, $u$ has a zero between $\alpha$ and $\beta$. It is the only zero of $u$ in this interval, for if not, a repetition of the argument applied to $u$ would produce a zero of $v$ between $\alpha$ and $\beta$, which is a contradiction.

(ii) ⇒ (i) This implication is clear.

In order to see the last statement, assume again that $\alpha < \beta$ are consecutive zeros of $v$ and that $v(t) > 0$ for $\alpha < t < \beta$. Let $w$ be a real solution of $(L - \mu)y = 0$ with $\mu > \lambda$. Assume that $w$ has no zeros between $\alpha$ and $\beta$ and that, in fact, $w(t) > 0$ for $\alpha < t < \beta$. Then

$$W\_{\alpha}(w,v) = w(\alpha)(pv')(\alpha) \ge 0, \quad W\_{\beta}(w,v) = w(\beta)(pv')(\beta) \le 0,$$

since $(pv')(\alpha) > 0$ and $(pv')(\beta) < 0$. Recall from (6.1.10) that on $(\alpha, \beta)$

$$(W\_x(w,v))' = r(x) \left(\mu - \lambda\right) w(x) \, v(x), \quad \alpha < x < \beta,$$

and the right-hand side is positive almost everywhere on $(\alpha, \beta)$, so that $W\_\beta(w, v) > W\_\alpha(w, v)$, which gives a contradiction. Thus, $w$ has at least one zero between $\alpha$ and $\beta$. □
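As a quick numerical illustration of the comparison statement (a toy constant-coefficient example, not the book's generality): for $L = -d^2/dx^2$, i.e., $p = r = 1$ and $q = 0$, the real solutions of $(L - \lambda)y = 0$ with $\lambda > 0$ are multiples of $\sin(\sqrt{\lambda}\,x + \theta)$, and for $\mu > \lambda$ every interval between consecutive zeros of $v(x) = \sin(\sqrt{\lambda}\,x)$ contains a zero of $w(x) = \sin(\sqrt{\mu}\,x)$:

```python
import numpy as np

# Toy check (an assumption, not from the book): L = -d^2/dx^2, so the zeros
# of v(x) = sin(sqrt(lam) x) are k*pi/sqrt(lam) and the zeros of
# w(x) = sin(sqrt(mu) x) are k*pi/sqrt(mu); Lemma 6.1.8 predicts a zero of w
# strictly between consecutive zeros of v whenever mu > lam.
lam, mu = 1.0, 2.5                              # mu > lam
v_zeros = np.arange(8) * np.pi / np.sqrt(lam)   # consecutive zeros of v
w_zeros = np.arange(16) * np.pi / np.sqrt(mu)   # zeros of w

for left, right in zip(v_zeros[:-1], v_zeros[1:]):
    # at least one zero of w strictly between consecutive zeros of v
    assert any(left < z < right for z in w_zeros)
```

The same check fails in general when $\mu < \lambda$, which is consistent with the direction of the inequality in the lemma.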

For later use the following simple observation is included.

**Lemma 6.1.9.** Assume that the sign condition (6.1.26) holds. Let $f$ and $g$ be real solutions of $(L - \lambda)y = 0$ with $\lambda \in \mathbb{R}$, and assume that $f$ does not vanish on $(\alpha, \beta) \subset (a, b)$. Then the function $g/f$ is monotone on $(\alpha, \beta)$ and for $x < y$ in $(\alpha, \beta)$ one has

$$\frac{g(y)}{f(y)} - \frac{g(x)}{f(x)} = c \int\_x^y \frac{1}{p(s)f(s)^2} \, ds,$$

where $c = W\_x(f, g)$.

Proof. A straightforward calculation shows that

$$\left(\frac{g}{f}\right)'(x) = \frac{W\_x(f,g)}{p(x)f(x)^2},\tag{6.1.28}$$

where $W\_x(f, g)$ is constant and the right-hand side has a constant sign on the interval $(\alpha, \beta)$, due to the condition (6.1.26). □
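For completeness, the computation behind (6.1.28) is short: by the quotient rule, and multiplying numerator and denominator by $p$,

```latex
\left(\frac{g}{f}\right)'(x)
  = \frac{g'(x) f(x) - g(x) f'(x)}{f(x)^2}
  = \frac{f(x)\,(pg')(x) - (pf')(x)\,g(x)}{p(x) f(x)^2}
  = \frac{W_x(f,g)}{p(x) f(x)^2},
```

valid almost everywhere on $(\alpha, \beta)$ since $p > 0$ a.e. there; integrating over $[x, y]$ and using that $W\_x(f, g) \equiv c$ is constant gives the formula in Lemma 6.1.9.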

## **6.2 Maximal and minimal Sturm–Liouville differential operators**

In Section 6.1 it has been shown that the differential expression $L$ in (6.1.1) may be applied to a complex function $f$ on $(a, b)$ when $f, pf' \in AC(a, b)$. The conditions in (6.1.2) were used for the existence and uniqueness theorem for the corresponding initial value problems. In this section it will be shown that the differential expression $L$ generates differential operators in the Hilbert space $L^2\_r(a, b)$, where the weight function $r$ is positive almost everywhere on the open interval $(a, b)$.

The maximal operator $T\_{\max}$ in $L^2\_r(a, b)$ associated with the differential expression $L$ is defined by

$$\begin{aligned} T\_{\text{max}}f &= Lf = \frac{1}{r} \Big[ -(pf')' + qf \Big], \\ \text{dom}\, T\_{\text{max}} &= \left\{ f \in L\_r^2(a,b) : f, pf' \in AC(a,b), \, Lf \in L\_r^2(a,b) \right\}. \end{aligned} \tag{6.2.1}$$

Recall from (6.1.6) that for f,g ∈ AC(a, b) the Wronskian is defined as

$$W\_x(f, \overline{g}) = f(x)\overline{(pg')(x)} - (pf')(x)\overline{g(x)},\tag{6.2.2}$$

and it follows from (6.1.11) and the definition of Tmax that for all f,g ∈ dom Tmax the limits

$$\lim\_{x \to a} W\_x(f, \overline{g}) \quad \text{and} \quad \lim\_{x \to b} W\_x(f, \overline{g})$$

exist separately. Therefore, it is a consequence of (6.1.11) that the following Green identity

$$(T\_{\max}f,g)\_{L^{2}\_{r}(a,b)} - (f,T\_{\max}g)\_{L^{2}\_{r}(a,b)} = \lim\_{x \to b} W\_{x}(f,\overline{g}) - \lim\_{x \to a} W\_{x}(f,\overline{g}) \tag{6.2.3}$$

holds. The preminimal operator $T\_0$ in $L^2\_r(a, b)$ is defined by

$$\begin{aligned} T\_0 f &= Lf = \frac{1}{r} \left[ -(pf')' + qf \right], \\ \text{dom}\, T\_0 &= \left\{ f \in \text{dom}\, T\_{\text{max}} \, : \, \text{supp}\, f \text{ is compact in } (a,b) \right\}. \end{aligned}$$

It follows from the Green formula (6.2.3) that the operator $T\_0$ is symmetric. The following theorem concerning the minimal operator $T\_{\min} = \overline{T\_0}$, the closure of the operator $T\_0$, is based on the existence and uniqueness result in Theorem 6.1.2.

**Theorem 6.2.1.** The closure $T\_{\min} = \overline{T\_0}$ of $T\_0$ is a densely defined closed symmetric operator in $L^2\_r(a, b)$ and it satisfies

$$T\_{\min} \subset (T\_{\min})^\* = T\_{\max},$$

and, consequently, $T\_{\min} = (T\_{\max})^\*$.

Proof. Step 1. This step is preparatory. Let $[\alpha, \beta] \subset (a, b)$ be a compact interval, in which case the restriction of $L$ to $(\alpha, \beta)$ is regular at $\alpha$ and $\beta$. Define the linear space $\mathfrak{D}\_{[\alpha,\beta]}$ as

$$\begin{aligned} \mathfrak{D}\_{[\alpha,\beta]} = \left\{ \varphi \in AC[\alpha,\beta] : p\varphi' \in AC[\alpha,\beta], \ L\varphi \in L^2\_r(\alpha,\beta), \\ \varphi(\alpha) = \varphi(\beta) = 0, \ (p\varphi')(\alpha) = (p\varphi')(\beta) = 0 \right\}, \end{aligned}$$

and the operator $S\_{[\alpha,\beta]}$ from $L^2\_r(\alpha, \beta)$ into itself by

$$S\_{[\alpha,\beta]}\varphi = L\varphi, \quad \varphi \in \mathfrak{D}\_{[\alpha,\beta]}.$$

Denote by $\mathfrak{N}$ the two-dimensional space of all solutions of $Ly = 0$ on $[\alpha, \beta]$; thus, if $h \in \mathfrak{N}$, then $h, ph' \in AC[\alpha, \beta]$ and $Lh = 0$. In particular, one has $\mathfrak{N} \subset L^2\_r(\alpha, \beta)$. Now one shows that the Hilbert space $L^2\_r(\alpha, \beta)$ admits the orthogonal decomposition

$$L\_r^2(\alpha, \beta) = \text{ran } S\_{[\alpha, \beta]} \oplus \mathfrak{N},\tag{6.2.4}$$

so that $\text{ran } S\_{[\alpha,\beta]}$ is automatically closed. For this, let $g = S\_{[\alpha,\beta]}\varphi$ with $\varphi \in \mathfrak{D}\_{[\alpha,\beta]}$. Then for any solution $u$ of $Ly = 0$ it follows from the Green identity that

$$\int\_{\alpha}^{\beta} g(x) \overline{u(x)} r(x) \, dx = W\_x(\varphi, \overline{u}) \, |\_{\alpha}^{\beta} = 0.$$

Hence, one concludes that $g \perp \mathfrak{N}$ in $L^2\_r(\alpha, \beta)$, which shows $\text{ran } S\_{[\alpha,\beta]} \subset \mathfrak{N}^\perp$. Conversely, assume that $g \in \mathfrak{N}^\perp$. Let $\varphi$ be the solution of the equation $L\varphi = g$ that is uniquely determined by the initial conditions $\varphi(\beta) = 0$ and $(p\varphi')(\beta) = 0$; cf. Theorem 6.1.2. For any solution $u$ of $Ly = 0$ it follows from the Green identity that

$$0 = \int\_{\alpha}^{\beta} g(x) \overline{u(x)} r(x) \, dx = -W\_{\alpha}(\varphi, \overline{u}).$$

By choosing the right initial conditions at $\alpha$ for the solution $u$ it follows that $\varphi(\alpha) = 0$ and $(p\varphi')(\alpha) = 0$. Hence, $g = L\varphi$ with $\varphi \in \mathfrak{D}\_{[\alpha,\beta]}$, i.e., $g = S\_{[\alpha,\beta]}\varphi$. Thus, $\mathfrak{N}^\perp \subset \text{ran } S\_{[\alpha,\beta]}$. Hence, (6.2.4) has been shown.

Step 2. The previous step will be used in conjunction with the following observation. Let [α, β] ⊂ (a, b) be a compact subinterval. Then one has the equivalence:

$$f \in \text{dom}\, T\_0 \quad \text{and} \quad \text{supp}\, f \subset [\alpha, \beta] \quad \Leftrightarrow \quad f \in \mathfrak{D}\_{[\alpha, \beta]}.\tag{6.2.5}$$

The implication (⇒) about the restriction of $f$ to $[\alpha, \beta]$ is clear by definition. Conversely, if $f \in \mathfrak{D}\_{[\alpha,\beta]}$, then $f$ can be trivially extended to all of $(a, b)$, and then the extension, also denoted by $f$, belongs to $\operatorname{dom} T\_{\max}$ since $f$ and $pf'$ are continuous across $\alpha$ and $\beta$. Hence, the implication (⇐) in (6.2.5) follows.

Step 3. The symmetric operator $T\_0$ is densely defined. To see this, assume that $g \in L^2\_r(a, b)$ satisfies $(g, \varphi)\_{L^2\_r(a,b)} = 0$ for all $\varphi \in \operatorname{dom} T\_0$, and let $u$ be any solution of $Lu = g$. For any compact interval $[\alpha, \beta]$ one has $(g, \varphi)\_{L^2\_r(a,b)} = 0$ for all $\varphi \in \operatorname{dom} T\_0$ with support in $[\alpha, \beta]$. Hence, for $[\alpha, \beta]$ fixed, one has by means of Step 2 that

$$0 = (g, \varphi)\_{L^2\_r(a,b)} = \int\_{\alpha}^{\beta} (Lu)(x) \overline{\varphi(x)} r(x) \, dx = \int\_{\alpha}^{\beta} u(x) \overline{(L\varphi)(x)} r(x) \, dx$$

for all $\varphi \in \mathfrak{D}\_{[\alpha,\beta]}$. According to Step 1 it follows that $u \in \mathfrak{N}$, i.e., $g = Lu = 0$ on $[\alpha, \beta]$. Since $[\alpha, \beta]$ is arbitrary, it follows that $g = 0$ in $L^2\_r(a, b)$. Thus, $T\_0$ is densely defined. In particular, it follows from $T\_0 \subset (T\_0)^\*$ that the operator $T\_0$ is closable. Therefore, $T\_{\min} = \overline{T\_0}$ is an operator.

Step 4. Observe first that $T\_{\max} \subset (T\_0)^\*$. In fact, if $f \in \operatorname{dom} T\_{\max}$, then for all $\varphi \in \operatorname{dom} T\_0$ one has

$$(T\_{\max}f,\varphi)\_{L^2\_r(a,b)} - (f,T\_0\varphi)\_{L^2\_r(a,b)} = (T\_{\max}f,\varphi)\_{L^2\_r(a,b)} - (f,T\_{\max}\varphi)\_{L^2\_r(a,b)} = 0$$

due to (6.2.3) and $\varphi$ being zero in a neighborhood of $a$ and of $b$; this proves the claim. It will now be shown that $(T\_0)^\* \subset T\_{\max}$. For this, let $\{f, g\} \in (T\_0)^\*$. Then for all $\varphi \in \operatorname{dom} T\_0$ one has

$$(g, \varphi)\_{L^2\_r(a,b)} = (f, T\_0 \varphi)\_{L^2\_r(a,b)}.$$

Now let $[\alpha, \beta] \subset (a, b)$ be any compact interval. Then for all $\varphi \in \operatorname{dom} T\_0$ with $\operatorname{supp} \varphi \subset [\alpha, \beta]$, or, by Step 2, for all $\varphi \in \mathfrak{D}\_{[\alpha,\beta]}$, one has

$$\int\_{\alpha}^{\beta} g(x) \overline{\varphi(x)} r(x) \, dx = \int\_{\alpha}^{\beta} f(x) \overline{(L\varphi)(x)} r(x) \, dx.$$

On $[\alpha, \beta]$ choose $u$ with $u, pu' \in AC[\alpha, \beta]$ such that $Lu = g$ almost everywhere. Then

$$\int\_{\alpha}^{\beta} (Lu)(x) \overline{\varphi(x)} r(x) \, dx = \int\_{\alpha}^{\beta} f(x) \overline{(L\varphi)(x)} r(x) \, dx$$

and integration by parts of the left-hand side yields

$$\int\_{\alpha}^{\beta} u(x) \overline{(L\varphi)(x)} r(x) \, dx = \int\_{\alpha}^{\beta} f(x) \overline{(L\varphi)(x)} r(x) \, dx,$$

so that $f - u \perp \text{ran } S\_{[\alpha,\beta]}$ in $L^2\_r(\alpha, \beta)$. By Step 1 one sees that $f - u \in \mathfrak{N}$. It follows that $f$ has the decomposition

$$f = h + u, \quad h = f - u,$$

where $h, ph', u, pu' \in AC[\alpha, \beta]$ and $Lh = 0$. Therefore, $f, pf' \in AC[\alpha, \beta]$ and $Lf = Lu = g$ almost everywhere on $[\alpha, \beta]$. This is true for each compact subinterval $[\alpha, \beta]$ of $(a, b)$, and hence $f, pf' \in AC(a, b)$ and $g = Lf$ almost everywhere on $(a, b)$. Therefore, $\{f, g\} \in T\_{\max}$ and $(T\_0)^\* \subset T\_{\max}$. □

As a consequence of Theorem 6.2.1 and Theorem 1.7.11 one sees that the graph of Tmax has the componentwise sum decomposition

$$T\_{\max} = T\_{\min} \,\widehat{+}\, \widehat{N}\_\lambda(T\_{\max}) \,\widehat{+}\, \widehat{N}\_{\overline{\lambda}}(T\_{\max}), \qquad \lambda \in \mathbb{C} \setminus \mathbb{R},$$

where the sums are direct.

Observe that $N\_\lambda(T\_{\max})$ consists of the functions that solve the differential equation $(L - \lambda)y = 0$ and belong to $L^2\_r(a, b)$. In particular, each of the defect numbers of $T\_{\min}$ is at most 2, and since the coefficient functions are real, it follows that the defect numbers are equal; cf. (6.1.4). If both endpoints are in the limit-circle case, then every solution of $(L - \lambda)y = 0$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$, belongs to $L^2\_r(a, b)$ by Theorem 6.1.5. If one endpoint is in the limit-circle case and the other endpoint is in the limit-point case, there is, up to scalar multiples, only one solution that belongs to $L^2\_r(a, b)$ by Theorem 6.1.6. This leads to the next corollary on the defect numbers of $T\_{\min}$.

**Corollary 6.2.2.** Let $T\_{\min}$ be the minimal operator associated with the differential expression $L$ in $L^2\_r(a, b)$. Then the following statements hold:

(i) If both endpoints $a$ and $b$ are in the limit-circle case, then the defect numbers of $T\_{\min}$ are $(2, 2)$.

(ii) If one of the endpoints is in the limit-circle case and the other endpoint is in the limit-point case, then the defect numbers of $T\_{\min}$ are $(1, 1)$.


In addition to the cases in Corollary 6.2.2 there is the situation where both endpoints of (a, b) are in the limit-point case. Then the defect numbers of Tmin are (0, 0), which will become clear in Section 6.5; cf. Corollary 6.5.2.

The cases (i) and (ii) in Corollary 6.2.2 will be treated in Section 6.3 and Section 6.4, respectively, in terms of boundary triplets. These triplets will be defined by means of the Green formula

$$(T\_{\max}f,g)\_{L^{2}\_{r}(a,b)} - (f,T\_{\max}g)\_{L^{2}\_{r}(a,b)} = \lim\_{x \to b} W\_{x}(f,\overline{g}) - \lim\_{x \to a} W\_{x}(f,\overline{g}),\tag{6.2.6}$$

where for f,g ∈ dom Tmax each of the limits

$$\lim\_{x \to a} W\_x(f, \overline{g}) \quad \text{and} \quad \lim\_{x \to b} W\_x(f, \overline{g})$$

exists separately. These limits will be essential ingredients in defining "boundary values" of functions in $\operatorname{dom} T\_{\max}$. In particular, observe that $T\_{\min} = (T\_{\max})^\*$ implies that $\operatorname{dom} T\_{\min}$ consists of all $f \in \operatorname{dom} T\_{\max}$ for which

$$\lim\_{x \to b} W\_x(f, \overline{g}) = \lim\_{x \to a} W\_x(f, \overline{g})$$

for all g ∈ dom Tmax . Hence, it follows from Proposition 6.1.3 that dom Tmin consists of all f ∈ dom Tmax for which the two separate limits must satisfy

$$\lim\_{x \to b} W\_x(f, \overline{g}) = 0 \quad \text{and} \quad \lim\_{x \to a} W\_x(f, \overline{g}) = 0 \tag{6.2.7}$$

for all $g \in \operatorname{dom} T\_{\max}$. An essential ingredient is the behavior of the Wronskian $W\_x(f, \overline{g})$ for $f, g \in \operatorname{dom} T\_{\max}$ near an endpoint which is regular or in the limit-circle case. First the regular case is considered.

**Lemma 6.2.3.** Assume that L is regular at a or b. Then a or b is in the limit-circle case, respectively. In particular, if f ∈ dom Tmax , then the limits

$$\begin{aligned} f(a) &= \lim\_{x \to a} f(x), \quad (pf')(a) = \lim\_{x \to a} (pf')(x), \\ f(b) &= \lim\_{x \to b} f(x), \quad (pf')(b) = \lim\_{x \to b} (pf')(x), \end{aligned} \tag{6.2.8}$$

exist, respectively. Moreover, for f,g ∈ dom Tmax

$$\begin{aligned} \lim\_{x \to a} W\_x(f, \overline{g}) &= f(a) \overline{(pg')(a)} - (pf')(a) \overline{g(a)}, \\ \lim\_{x \to b} W\_x(f, \overline{g}) &= f(b) \overline{(pg')(b)} - (pf')(b) \overline{g(b)}, \end{aligned}$$

respectively.

Proof. Assume that $L$ is regular at $b$. Then it follows from Theorem 6.1.2 that all solutions of $(L - \lambda)y = 0$, $\lambda \in \mathbb{C}$, belong to $AC(a, b]$ and hence are bounded near $b$. Since $r$ is integrable at $b$, all solutions belong to $L^2\_r(b', b)$, and so the limit-circle case prevails at $b$ by Theorem 6.1.6. Let $f \in \operatorname{dom} T\_{\max}$ and let $g = T\_{\max} f$. Then $f, g \in L^2\_r(a, b)$ and $f, pf' \in AC(a, b]$ by Theorem 6.1.2. This implies the statements. □

If the endpoint is in the limit-circle case but not regular, then the existence of the individual limits in (6.2.8) is not guaranteed. In that case the notion of quasi-derivative proves useful.

**Definition 6.2.4.** Let $u$ and $v$ be linearly independent real solutions of the equation $(L - \lambda\_0)y = 0$ for some $\lambda\_0 \in \mathbb{R}$, and assume that the solutions are normalized by $W(u, v) = 1$. Let $f$ be a complex function on $(a, b)$ for which $f, pf' \in AC(a, b)$. Then the quasi-derivatives of $f$, induced by the normalized solutions $u$ and $v$, are defined as complex functions on $(a, b)$ given by

$$f^{[0]} := W(f, v) \quad \text{and} \quad f^{[1]} := -W(f, u). \tag{6.2.9}$$

Let $f$ and $g$ be functions on $(a, b)$ for which $f, pf', g, pg' \in AC(a, b)$. Then it follows from the Plücker identity in (6.1.7) that

$$W\_x(f,g) = f^{[0]}(x)g^{[1]}(x) - f^{[1]}(x)g^{[0]}(x). \tag{6.2.10}$$

Note that the right-hand side has an appearance which is similar to the right-hand sides of (6.1.6) and (6.2.2), but both $f^{[0]}$ and $f^{[1]}$ are made up of $f$ and $pf'$. In the limit-circle case the individual factors also have limits.
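As a consistency check of (6.2.10) (assuming the Plücker identity (6.1.7) takes the standard three-term form for $2 \times 2$ Wronskians), one computes directly from (6.2.9) and $W(u, v) = 1$:

```latex
f^{[0]}(x)\,g^{[1]}(x) - f^{[1]}(x)\,g^{[0]}(x)
  = W_x(f,u)\,W_x(g,v) - W_x(f,v)\,W_x(g,u)
  = W_x(f,g)\,W_x(u,v)
  = W_x(f,g).
```

The middle step is exactly the Plücker relation applied to the four pairs $(f, pf')$, $(g, pg')$, $(u, pu')$, $(v, pv')$.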

**Lemma 6.2.5.** Let $u$ and $v$ be linearly independent real solutions of the equation $(L - \lambda\_0)y = 0$ for some $\lambda\_0 \in \mathbb{R}$ which are normalized by $W(u, v) = 1$, and let $f, g \in \operatorname{dom} T\_{\max}$. If $u, v \in L^2\_r(a, a')$, then the limits

$$f^{[0]}(a) = \lim\_{x \to a} f^{[0]}(x) \quad \text{and} \quad f^{[1]}(a) = \lim\_{x \to a} f^{[1]}(x)$$

exist, and consequently

$$\lim\_{x \to a} W\_x(f, \overline{g}) = f^{[0]}(a) \overline{g^{[1]}(a)} - f^{[1]}(a) \overline{g^{[0]}(a)}.$$

Likewise, if $u, v \in L^2\_r(b', b)$, then the limits

$$f^{[0]}(b) = \lim\_{x \to b} f^{[0]}(x) \quad \text{and} \quad f^{[1]}(b) = \lim\_{x \to b} f^{[1]}(x)$$

exist, and consequently

$$\lim\_{x \to b} W\_x(f, \overline{g}) = f^{[0]}(b) \overline{g^{[1]}(b)} - f^{[1]}(b) \overline{g^{[0]}(b)}.$$

Proof. Let $\phi$ be any real solution of $(L - \lambda\_0)y = 0$. Since $f \in \operatorname{dom} T\_{\max}$ by assumption, it is clear that $(L - \lambda\_0)f \in L^2\_r(a, b)$. It is easily seen from (6.1.9) that the following identity

$$W\_x(f, \phi) - W\_s(f, \phi) = \int\_s^x ((L - \lambda\_0)f)(t)\,\phi(t)\,r(t)\,dt$$

holds for all $a < s < x < b$. If, in addition, $\phi \in L^2\_r(a, a')$, then

$$W\_a(f, \phi) = \lim\_{x \to a} W\_x(f, \phi)$$

exists, as can be seen from the dominated convergence theorem. The assertions in the lemma then follow by taking $\phi = v$ and $\phi = -u$, respectively. □

At the end of this section it is assumed that the coefficient functions satisfy (6.1.2) and that the endpoint a is in the limit-circle case. The following result is about solving the Sturm–Liouville equation (L − λ)f = g with initial conditions in terms of quasi-derivatives. The proof of this result has the same background as the proof of Theorem 6.1.2. Just write the equation as a first-order system, but now involving quasi-derivatives:

$$
\begin{pmatrix} f^{[0]} \\ f^{[1]} \end{pmatrix}' = (\lambda - \lambda\_0) \begin{pmatrix} uvr & v^2r \\ -u^2r & -uvr \end{pmatrix} \begin{pmatrix} f^{[0]} \\ f^{[1]} \end{pmatrix} + \begin{pmatrix} vrg \\ -urg \end{pmatrix}.
$$

The new initial value problem is equivalent to a Volterra integral equation, which can be solved in the usual way by successive approximations when all data are locally integrable. Note that the condition $g \in L^2\_r(a, b)$ in the proposition implies that $urg, vrg \in L^1(a, a')$. There is a similar statement when the endpoint $b$ is in the limit-circle case.
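A sketch of how the first-order system arises: if $(L - \lambda)f = g$, then $(pf')' = qf - r(\lambda f + g)$, while $(pv')' = (q - \lambda\_0 r)v$ and $(pu')' = (q - \lambda\_0 r)u$. Differentiating $f^{[0]} = W(f, v)$, the terms $f'(pv') - (pf')v'$ cancel, and one finds

```latex
\big(f^{[0]}\big)' = f\,(pv')' - (pf')'\,v
  = (\lambda - \lambda_0)\, r f v + r g v,
\qquad
\big(f^{[1]}\big)' = -(\lambda - \lambda_0)\, r f u - r g u.
```

Substituting the expansion $f = f^{[0]} u + f^{[1]} v$, which holds pointwise since $W(u, v) = 1$, reproduces the matrix system displayed above.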

**Proposition 6.2.6.** Let a be in the limit-circle case. Let u, v be real solutions of (L − λ0)y = 0, λ0 ∈ R, such that W(u, v) = 1 and u, v ∈ L²ᵣ(a, a'). Let c1, c2 ∈ C and let g ∈ L²ᵣ(a, b). Then for all λ ∈ C the initial value problem

$$(L - \lambda)f = g, \qquad f^{[0]}(a) = c\_1, \quad f^{[1]}(a) = c\_2,$$

has a unique solution f ∈ AC(a, b) for which pf' ∈ AC(a, b). In addition, the functions

$$
\lambda \mapsto f^{[0]}(a, \lambda) \quad \text{and} \quad \lambda \mapsto f^{[1]}(a, \lambda)
$$

are entire.

It follows that under the circumstances of Proposition 6.2.6 for any λ ∈ C the homogeneous equation (L − λ)y = 0 has a fundamental system, i.e., for any λ ∈ C there are two solutions u1(·, λ) and u2(·, λ) of (L − λ)y = 0 which are linearly independent when it is required that

$$
\begin{pmatrix} u\_1^{[0]}(a,\lambda) & u\_2^{[0]}(a,\lambda) \\ u\_1^{[1]}(a,\lambda) & u\_2^{[1]}(a,\lambda) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix};\tag{6.2.11}
$$

cf. (6.2.10). Moreover, each of the entries of the left-hand side gives an entire function in λ.

The quasi-derivatives of an element f ∈ dom Tmax may be interpreted as coefficients of f in terms of a local expansion involving the square-integrable solutions u and v as follows.

**Lemma 6.2.7.** Let u and v be linearly independent real solutions of the equation (L − λ0)y = 0 for some λ0 ∈ R which are normalized by W(u, v) = 1. Assume that f ∈ dom Tmax and let the quasi-derivatives f[0] and f[1] be defined as in (6.2.9). If u, v ∈ L²ᵣ(a, a'), then

$$f(x) = u(x) \left( f^{[0]}(a) + o(1) \right) + v(x) \left( f^{[1]}(a) + o(1) \right), \quad x \to a.$$

Likewise, if u, v ∈ L²ᵣ(b', b), then

$$f(x) = u(x) \left( f^{[0]}(b) + o(1) \right) + v(x) \left( f^{[1]}(b) + o(1) \right), \quad x \to b.$$

Proof. Consider the case of the endpoint a. Let the function g be defined by g = (Tmax − λ0)f and note that g ∈ L²ᵣ(a, b). There exist unique α, β ∈ C such that

$$f(x) = u(x)\left(\alpha + \int\_{a}^{x} v(t)g(t)r(t) \, dt\right) + v(x)\left(\beta - \int\_{a}^{x} u(t)g(t)r(t) \, dt\right);\tag{6.2.12}$$

cf. the variation of constants formula (6.1.13). One sees, via the Cauchy–Schwarz inequality, that

$$\int\_{a}^{x} v(t)g(t)r(t) \, dt = o(1) \quad \text{and} \quad \int\_{a}^{x} u(t)g(t)r(t) \, dt = o(1), \quad x \to a.$$

In order to identify the parameters α and β use that

$$f^{[0]} = f(pv') - (pf')v \quad \text{and} \quad f^{[1]} = (pf')u - f(pu').$$

Now substitute (6.2.12) and its differentiated form

$$\begin{aligned} (pf')(x) &= (pu')(x) \left( \alpha + \int\_a^x v(t)g(t)r(t) \, dt \right) \\ &+ (pv')(x) \left( \beta - \int\_a^x u(t)g(t)r(t) \, dt \right), \end{aligned}$$

so that with the normalization W(u, v) = 1 it follows that

$$f^{[0]}(x) = \alpha + \int\_{a}^{x} v(t)g(t)r(t) \, dt \quad \text{and} \quad f^{[1]}(x) = \beta - \int\_{a}^{x} u(t)g(t)r(t) \, dt.$$

Hence, one obtains that α = f[0](a) and β = f[1](a).

Next consider the case of the endpoint b. Likewise, if u, v ∈ L²ᵣ(b', b), then there exist unique γ, δ ∈ C such that

$$f(x) = u(x)\left(\gamma - \int\_x^b v(t)g(t)r(t)\,dt\right) + v(x)\left(\delta + \int\_x^b u(t)g(t)r(t)\,dt\right),$$

where

$$\int\_{x}^{b} v(t)g(t)r(t) \, dt = o(1) \quad \text{and} \quad \int\_{x}^{b} u(t)g(t)r(t) \, dt = o(1), \quad x \to b.$$

In a similar way one can show that γ = f[0](b) and δ = f[1](b). □

## **6.3 Regular and limit-circle endpoints**

Let Tmax = (Tmin )<sup>∗</sup> be the maximal operator associated with the Sturm–Liouville differential expression L in (6.1.1) on the interval (a, b). This situation will be considered first under the assumption that both endpoints a and b are regular and then in the end of this section the endpoints in the limit-circle case are treated.

Assume that the endpoints a and b are regular, i.e., [a, b] is a compact interval and

$$\begin{cases} p(x) \neq 0, \ r(x) > 0, & \text{for almost all } x \in (a, b), \\ 1/p, q, r \in L^1(a, b); & \end{cases}$$

cf. Definition 6.1.1. It follows from Theorem 6.1.2 that there exists a fundamental system (u1(·, λ); u2(·, λ)) for the equation (L − λ)y = 0 with the initial conditions

$$
\begin{pmatrix} u\_1(a,\lambda) & u\_2(a,\lambda) \\ (pu\_1')(a,\lambda) & (pu\_2')(a,\lambda) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} . \tag{6.3.1}
$$

Then for i = 1, 2 each of the mappings λ → ui(x, λ) and λ → (pui')(x, λ) is entire for fixed x ∈ [a, b]. From (6.2.1) and Theorem 6.1.2 one also sees that every function f ∈ dom Tmax satisfies f, pf' ∈ AC[a, b] and the quantities f(a), (pf')(a), f(b), and (pf')(b) are well defined.

**Proposition 6.3.1.** Assume that the endpoints a and b are regular. Then {C², Γ0, Γ1}, where

$$
\Gamma\_0 f = \begin{pmatrix} f(a) \\ f(b) \end{pmatrix} \quad \text{and} \quad \Gamma\_1 f = \begin{pmatrix} (pf')(a) \\ -(pf')(b) \end{pmatrix}, \quad f \in \text{dom}\, T\_{\text{max}}\,, \tag{6.3.2}
$$

is a boundary triplet for (Tmin )<sup>∗</sup> = Tmax . The self-adjoint extension A<sup>0</sup> corresponding to Γ<sup>0</sup> is the restriction of Tmax defined on

$$\text{dom}\,A\_0 = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f(a) = f(b) = 0 \right\},$$

and the minimal operator Tmin is the restriction of Tmax defined on

$$\text{dom}\,T\_{\text{min}} = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f(a) = f(b) = (pf')(a) = (pf')(b) = 0 \right\}.$$

Moreover, for all λ ∈ ρ(A0) one has u2(b, λ) ≠ 0. The corresponding γ-field and Weyl function are given by

$$\gamma(\lambda) = \begin{pmatrix} u\_1(\cdot,\lambda) & u\_2(\cdot,\lambda) \end{pmatrix} \frac{1}{u\_2(b,\lambda)} \begin{pmatrix} u\_2(b,\lambda) & 0 \\ -u\_1(b,\lambda) & 1 \end{pmatrix}, \quad \lambda \in \rho(A\_0),$$

and

$$M(\lambda) = \frac{1}{u\_2(b,\lambda)} \begin{pmatrix} -u\_1(b,\lambda) & 1\\ 1 & -(pu\_2')(b,\lambda) \end{pmatrix}, \quad \lambda \in \rho(A\_0).$$

Proof. Assume that the endpoints a and b are regular. First it will be shown that (6.3.2) defines a boundary triplet. For f,g ∈ dom Tmax one has by (6.2.6) and Lemma 6.2.3

$$\begin{aligned} (T\_{\max}f,g) - (f,T\_{\max}g) &= \lim\_{x \to b} W\_x(f,\overline{g}) - \lim\_{x \to a} W\_x(f,\overline{g}) \\ &= f(b)\overline{(pg')(b)} - (pf')(b)\overline{g(b)} - f(a)\overline{(pg')(a)} + (pf')(a)\overline{g(a)}, \end{aligned}$$

which implies that the abstract Green identity is satisfied with the choice of Γ0 and Γ1 in (6.3.2). Furthermore, the mapping (Γ0, Γ1) : dom Tmax → C⁴ is surjective. To see this, choose α ∈ C⁴ and consider the initial value problem (6.1.5) with λ = 0, g = 0, and x0 = a with initial data α1, α2 ∈ C. Then there is a unique solution f with f, pf' ∈ AC[a, b]. Now cut off this function so that it becomes a solution fa of a corresponding inhomogeneous equation which is trivial in a neighborhood of b; see Proposition 6.1.3. It is clear that fa ∈ dom Tmax and

$$
\Gamma\_0 f\_a = \begin{pmatrix} \alpha\_1 \\ 0 \end{pmatrix}, \quad \Gamma\_1 f\_a = \begin{pmatrix} \alpha\_2 \\ 0 \end{pmatrix}.
$$

A similar procedure at the other endpoint gives a function f<sup>b</sup> ∈ dom Tmax and

$$
\Gamma\_0 f\_b = \begin{pmatrix} 0 \\ \alpha\_3 \end{pmatrix}, \quad \Gamma\_1 f\_b = \begin{pmatrix} 0 \\ \alpha\_4 \end{pmatrix}.
$$

Taking h = f<sup>a</sup> + f<sup>b</sup> completes the argument. Thus, (6.3.2) defines a boundary triplet for (Tmin )∗.

The description of the domain of the self-adjoint extension A0 is trivial. Since u2(a, λ) = 0 for all λ ∈ C, it follows that u2(b, λ) ≠ 0 for λ ∈ ρ(A0), as otherwise u2(·, λ) would be an eigenfunction for A0, which contradicts λ ∈ ρ(A0). Furthermore, by Proposition 2.1.2 (ii) one has dom Tmin = ker Γ0 ∩ ker Γ1, which implies the description of the domain of the minimal operator.

In order to compute the γ-field and Weyl function corresponding to the boundary triplet {C², Γ0, Γ1}, note that in terms of the fundamental system determined by (6.3.1) every element in Nλ(Tmax) has the form

$$f(\cdot,\lambda) = \begin{pmatrix} u\_1(\cdot,\lambda) & u\_2(\cdot,\lambda) \end{pmatrix} \begin{pmatrix} c\_1\\ c\_2 \end{pmatrix}, \quad c\_1, c\_2 \in \mathbb{C}.$$

It follows from Definition 2.3.1 that γ(λ) is given by

$$\left\{ \begin{pmatrix} 1 & 0\\ u\_1(b,\lambda) & u\_2(b,\lambda) \end{pmatrix} \begin{pmatrix} c\_1\\ c\_2 \end{pmatrix}, \begin{pmatrix} u\_1(\cdot,\lambda) & u\_2(\cdot,\lambda) \end{pmatrix} \begin{pmatrix} c\_1\\ c\_2 \end{pmatrix} \right\}$$

for all pairs c1, c2 ∈ C and, likewise, it follows from Definition 2.3.4 that M(λ) is given by

$$\begin{Bmatrix} \begin{pmatrix} 1 & 0\\ u\_1(b,\lambda) & u\_2(b,\lambda) \end{pmatrix} \begin{pmatrix} c\_1\\ c\_2 \end{pmatrix}, \begin{pmatrix} 0 & 1\\ -(pu\_1')(b,\lambda) & -(pu\_2')(b,\lambda) \end{pmatrix} \begin{pmatrix} c\_1\\ c\_2 \end{pmatrix} \end{Bmatrix}$$

for all pairs c1, c2 ∈ C. Since u2(b, λ) ≠ 0 for λ ∈ ρ(A0), the stated results follow. In particular, the last result on the form of the Weyl function M follows as the Wronskian of u1 and u2 is constant and equal to one; cf. (6.3.1). □

Note that the self-adjoint operator A<sup>0</sup> in Proposition 6.3.1 corresponds to Dirichlet boundary conditions and the self-adjoint operator A<sup>1</sup> defined on ker Γ<sup>1</sup> corresponds to Neumann boundary conditions. In the next corollary the boundary condition at the endpoint b is fixed as the Dirichlet condition f(b) = 0. The corresponding boundary triplet appears as a restriction of the boundary triplet in Proposition 6.3.1. Corollary 6.3.2 can be seen as an immediate consequence of Proposition 6.3.1 and Proposition 2.5.12 applied to the decomposition

$$\mathbb{C}^2 = \mathcal{G} = \mathcal{G}' \oplus \mathcal{G}'', \quad \text{with } \mathcal{G}' = \text{span}\begin{pmatrix} 1 \\ 0 \end{pmatrix} \text{ and } \mathcal{G}'' = \text{span}\begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$

A short direct argument will be given.

**Corollary 6.3.2.** Assume that the endpoints a and b are regular. Let the operator T'min be the extension of Tmin defined on

$$\text{dom}\,T\_{\text{min}}^{\prime} = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f(a) = (pf^{\prime})(a) = f(b) = 0 \right\}.$$

Then T'min is a densely defined closed symmetric operator with defect numbers (1, 1) and Tmin ⊂ T'min ⊂ A0. The adjoint (T'min)∗ is defined on

$$\text{dom}\,(T\_{\text{min}}')^\* = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f(b) = 0 \right\}.\tag{6.3.3}$$

Then {C, Γ'0, Γ'1}, where

$$
\Gamma\_0'f = f(a) \quad \text{and} \quad \Gamma\_1'f = (pf')(a), \quad f \in \text{dom}\left(T\_{\text{min}}'\right)^\*,\tag{6.3.4}
$$

is a boundary triplet for (T'min)∗. Moreover, for λ ∈ ρ(A0) the corresponding γ-field and the Weyl function are given by

$$\gamma'(\cdot,\lambda) = u\_1(\cdot,\lambda) - \frac{u\_1(b,\lambda)}{u\_2(b,\lambda)} u\_2(\cdot,\lambda) \quad \text{and} \quad M'(\lambda) = -\frac{u\_1(b,\lambda)}{u\_2(b,\lambda)}.$$

Proof. One verifies that the adjoint (T'min)∗ is the restriction of Tmax given by the boundary condition in (6.3.3) and that {C, Γ'0, Γ'1} in (6.3.4) is a boundary triplet for (T'min)∗. To see the last statement, let

$$f(\cdot,\lambda) = \alpha(\lambda)u\_1(\cdot,\lambda) + \beta(\lambda)u\_2(\cdot,\lambda) \in \ker\left( (T'\_{\min})^\* - \lambda \right),$$

where α(λ)u1(b, λ) + β(λ)u2(b, λ) = 0. For λ ∈ ρ(A0) one obtains

$$\gamma'(\lambda) = u\_1(\cdot, \lambda) + \frac{\beta(\lambda)}{\alpha(\lambda)} u\_2(\cdot, \lambda) \quad \text{and} \quad M'(\lambda) = \frac{\beta(\lambda)}{\alpha(\lambda)}.$$

This completes the proof. □

**Example 6.3.3.** In the special case r = p = 1 and a constant q ∈ R, the Sturm–Liouville expression is Lf = −f'' + qf. For λ > q the fundamental system determined by (6.3.1) is given by

$$u\_1(x,\lambda) = \cos\left[\sqrt{\lambda - q}\left(x - a\right)\right], \quad u\_2(x,\lambda) = \frac{\sin\left[\sqrt{\lambda - q}\left(x - a\right)\right]}{\sqrt{\lambda - q}},$$

and if the square root √· is fixed such that Im √λ > 0 for λ ∈ C \ [0, ∞) and √λ ≥ 0 for λ ∈ [0, ∞), the formula extends to λ ∈ C \ {q}. Hence the Weyl function in Proposition 6.3.1 is

$$M(\lambda) = \frac{\sqrt{\lambda - q}}{\sin\left[\sqrt{\lambda - q}\left(b - a\right)\right]} \begin{pmatrix} -\cos\left[\sqrt{\lambda - q}\left(b - a\right)\right] & 1\\ 1 & -\cos\left[\sqrt{\lambda - q}\left(b - a\right)\right] \end{pmatrix},$$

and the Weyl function in Corollary 6.3.2 is

$$M'(\lambda) = -\sqrt{\lambda - q} \cot \left[ \sqrt{\lambda - q} \left( b - a \right) \right].$$

The poles of M and M' are given by

$$\left\{ \frac{(k\pi)^2}{(b-a)^2} + q \, : \, k \in \mathbb{N} \right\}$$

and coincide with the eigenvalues of the self-adjoint extension A0.
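As a quick numerical sanity check (added here, not part of the text), one can approximate the Dirichlet eigenvalues of −f'' + qf on (a, b) by a standard finite-difference discretization and compare them with the poles (kπ)²/(b − a)² + q listed above; the interval, q, and grid size below are arbitrary illustrative choices.

```python
import numpy as np

# Finite-difference approximation of the Dirichlet realization A_0 of
# L f = -f'' + q f on (a, b); parameters are illustrative only.
a, b, q, n = 0.0, np.pi, 2.0, 500
h = (b - a) / (n + 1)

# Tridiagonal matrix for -f'' with boundary conditions f(a) = f(b) = 0.
main = np.full(n, 2.0 / h**2 + q)
off = np.full(n - 1, -1.0 / h**2)
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigs = np.sort(np.linalg.eigvalsh(T))[:3]
exact = np.array([(k * np.pi / (b - a))**2 + q for k in (1, 2, 3)])
print(eigs, exact)  # lowest discrete eigenvalues vs. poles of M'
```

The lowest discrete eigenvalues agree with the poles of M' up to the O(h²) discretization error.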

**Proposition 6.3.4.** For λ ∈ ρ(A0) the resolvent of A<sup>0</sup> is an integral operator of the form

$$\left((A\_0-\lambda)^{-1}g\right)(t) = \int\_a^b G\_0(t,s,\lambda)g(s)r(s)\,ds, \quad g \in L^2\_r(a,b),$$

where the Green function G0(t, s, λ) is given by

$$G\_0(t,s,\lambda) = \begin{cases} (u\_1(t,\lambda) + M'(\lambda)u\_2(t,\lambda))u\_2(s,\lambda), & a < s < t, \\ u\_2(t,\lambda)(u\_1(s,\lambda) + M'(\lambda)u\_2(s,\lambda)), & t < s < b. \end{cases}$$

Here M' is the Weyl function in Corollary 6.3.2. The integral operator belongs to the Hilbert–Schmidt class. In particular, σ(A0) = σp(A0) and the multiplicity of each eigenvalue is 1.

Proof. A straightforward calculation shows that the function

$$\begin{aligned} f(t) &= \int\_a^b G\_0(t, s, \lambda) g(s) r(s) ds \\ &= (u\_1(t, \lambda) + M'(\lambda) u\_2(t, \lambda)) \int\_a^t u\_2(s, \lambda) g(s) r(s) \, ds \\ &+ u\_2(t, \lambda) \int\_t^b (u\_1(s, \lambda) + M'(\lambda) u\_2(s, \lambda)) g(s) r(s) \, ds \end{aligned}$$

satisfies the differential equation (L − λ)f = g. Since u2(a, λ) = 0, it is clear that f(a) = 0. Moreover, since M'(λ)u2(b, λ) = −u1(b, λ), one also has f(b) = 0. As f is continuous on [a, b] and r ∈ L¹(a, b), it follows that f ∈ L²ᵣ(a, b) and hence f ∈ dom A0. Therefore, (A0 − λ)f = g and for λ ∈ ρ(A0) the resolvent of A0 is of the form as stated.

Furthermore, one has

$$\int\_{a}^{b} \int\_{a}^{b} |G\_0(t, s, \lambda)|^2 r(s)r(t) \, ds \, dt < \infty$$

since <sup>G</sup>0(·, ·, λ) is continuous for <sup>a</sup> <sup>≤</sup> <sup>s</sup> <sup>≤</sup> <sup>t</sup> and <sup>t</sup> <sup>≤</sup> <sup>s</sup> <sup>≤</sup> <sup>b</sup> and <sup>r</sup> <sup>∈</sup> <sup>L</sup>1(a, b). Thus, (A<sup>0</sup> <sup>−</sup> <sup>λ</sup>)−<sup>1</sup> is a Hilbert–Schmidt operator and, in particular, <sup>σ</sup>(A0) = <sup>σ</sup>p(A0) holds. Since the eigenfunctions of A<sup>0</sup> are multiples of the solution u2(·, λ) it also follows that the eigenvalues of A<sup>0</sup> have multiplicity one. -

It is easy to see that the closed symmetric operators Tmin and T'min in Proposition 6.3.1 and Corollary 6.3.2 do not have eigenvalues. Therefore, since the spectrum of A0 is purely discrete according to Proposition 6.3.4, the next corollary is immediate from Proposition 3.4.8.

**Corollary 6.3.5.** Tmin and T'min are simple symmetric operators in L²ᵣ(a, b).

It follows from Corollary 6.3.5 and the results in Section 3.5 that the spectrum of A0 can be characterized with the help of the Weyl functions M and M' in Proposition 6.3.1 and Corollary 6.3.2. More precisely, in the present situation the functions

$$M(\lambda) = \frac{1}{u\_2(b,\lambda)} \begin{pmatrix} -u\_1(b,\lambda) & 1\\ 1 & -(pu\_2')(b,\lambda) \end{pmatrix} \quad \text{and}\quad M'(\lambda) = -\frac{u\_1(b,\lambda)}{u\_2(b,\lambda)}$$

are defined and holomorphic on ρ(A0), and one has λ ∈ σ(A0) = σp(A0) if and only if λ is a pole of M and of M'. In particular, it follows that λ ∈ σp(A0) if and only if u2(b, λ) = 0; this extends the observation in Proposition 6.3.1 that u2(b, λ) ≠ 0 for all λ ∈ ρ(A0). The two linear maps

$$\tau : \ker \left( A\_0 - \lambda \right) \to \operatorname{ran} \mathcal{R}\_{\lambda}, \qquad f(\cdot, \lambda) \mapsto \Gamma\_1 f(\cdot, \lambda) = \begin{pmatrix} (pf')(a, \lambda) \\ -(pf')(b, \lambda) \end{pmatrix},$$

and

$$\tau': \ker\left(A\_0 - \lambda\right) \to \text{ran}\,\mathcal{R}'\_{\lambda}, \qquad f(\cdot, \lambda) \mapsto \Gamma'\_1 f(\cdot, \lambda) = (pf')(a, \lambda),$$

where

$$\mathcal{R}\_\lambda \varphi = \lim\_{\varepsilon \downarrow 0} i\varepsilon M(\lambda + i\varepsilon)\varphi, \quad \varphi \in \mathbb{C}^2, \quad \text{and} \quad \mathcal{R}'\_\lambda = \lim\_{\varepsilon \downarrow 0} i\varepsilon M'(\lambda + i\varepsilon)$$

coincide with the residues of M and M' at λ, are bijective.

In the following some classes of extensions of Tmin and their spectral properties are briefly discussed. Let {C², Γ0, Γ1} be the boundary triplet in Proposition 6.3.1 with corresponding γ-field γ and Weyl function M. Recall first that the self-adjoint (maximal dissipative, maximal accumulative) extensions AΘ ⊂ Tmax of Tmin are in one-to-one correspondence with the self-adjoint (maximal dissipative, maximal accumulative, respectively) relations Θ in C² via

$$\begin{split} \text{dom}\,A\_{\Theta} &= \left\{ f \in \text{dom}\,T\_{\text{max}} : \{ \Gamma\_0 f, \Gamma\_1 f \} \in \Theta \right\} \\ &= \left\{ f \in \text{dom}\,T\_{\text{max}} : \left\{ \begin{pmatrix} f(a) \\ f(b) \end{pmatrix}, \begin{pmatrix} (pf')(a) \\ -(pf')(b) \end{pmatrix} \right\} \in \Theta \right\}. \end{split} \tag{6.3.5}$$

In the following assume that Θ is a self-adjoint relation in C², so that the operator AΘ is a self-adjoint realization of the Sturm–Liouville differential expression L in L²ᵣ(a, b). By Corollary 1.10.9, the relation Θ in C² can be represented by means of 2 × 2 matrices A and B satisfying the conditions A∗B = B∗A, AB∗ = BA∗, and A∗A + B∗B = I = AA∗ + BB∗, namely,

$$\Theta = \left\{ \{ \mathcal{A}\varphi, \mathcal{B}\varphi \} : \varphi \in \mathbb{C}^2 \right\} = \left\{ \{ \psi, \psi' \} : \mathcal{A}^\* \psi' = \mathcal{B}^\* \psi \right\}.$$

In that case one has

$$\operatorname{dom} A\_{\Theta} = \left\{ f \in \operatorname{dom} T\_{\max} \, : \, \mathcal{A}^\* \begin{pmatrix} (pf')(a) \\ -(pf')(b) \end{pmatrix} = \mathcal{B}^\* \begin{pmatrix} f(a) \\ f(b) \end{pmatrix} \right\}.$$

Recall from Theorem 2.6.1 and Corollary 2.6.3 the Kreĭn formula for the corresponding resolvents: for λ ∈ ρ(AΘ) ∩ ρ(A0),

$$\begin{aligned} (A\_{\Theta} - \lambda)^{-1} &= (A\_0 - \lambda)^{-1} + \gamma(\lambda) \left(\Theta - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\* \\ &= (A\_0 - \lambda)^{-1} + \gamma(\lambda) \mathcal{A} \left(\mathcal{B} - M(\lambda) \mathcal{A}\right)^{-1} \gamma(\overline{\lambda})^\*. \end{aligned}$$

Since the spectrum of A<sup>0</sup> is discrete and the difference of the resolvents of A<sup>0</sup> and A<sup>Θ</sup> is an operator of rank ≤ 2, it is clear that the spectrum of the self-adjoint operator A<sup>Θ</sup> is discrete. Note that λ ∈ ρ(A0) is an eigenvalue of A<sup>Θ</sup> if and only if ker (Θ − M(λ)) or, equivalently, ker (B − M(λ)A) is nontrivial, and that

$$\ker\left(A\_{\Theta} - \lambda\right) = \gamma(\lambda)\ker\left(\Theta - M(\lambda)\right) = \gamma(\lambda)\mathcal{A}\ker\left(\mathcal{B} - M(\lambda)\mathcal{A}\right).$$

For a complete description of the (discrete) spectrum of AΘ recall that the symmetric operator Tmin is simple and make use of a transform of the boundary triplet {C², Γ0, Γ1} as in Section 3.8. This reasoning implies that λ is an eigenvalue of AΘ if and only if λ is a pole of the function

$$\lambda \mapsto M\_{\Theta}(\lambda) = \left(\mathcal{A}^\* + \mathcal{B}^\* M(\lambda)\right) \left(\mathcal{B}^\* - \mathcal{A}^\* M(\lambda)\right)^{-1}.$$

It is important to note in this context that the multiplicity of the eigenvalues of AΘ is at most 2 and that the dimension of the eigenspace ker (AΘ − λ) coincides with the dimension of the range of the residue of MΘ at λ. In the special case where the self-adjoint relation Θ in C² is a 2 × 2 matrix, the boundary condition in (6.3.5) reads

$$\text{dom}\,A\_{\Theta} = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, \Theta \begin{pmatrix} f(a) \\ f(b) \end{pmatrix} = \begin{pmatrix} (pf')(a) \\ -(pf')(b) \end{pmatrix} \right\},$$

and according to Section 3.8 the spectral properties of the self-adjoint operator A<sup>Θ</sup> can also be described with the help of the function

$$
\lambda \mapsto \left(\Theta - M(\lambda)\right)^{-1};\tag{6.3.6}
$$

that is, the poles of the matrix function (6.3.6) coincide with the (discrete) spectrum of A<sup>Θ</sup> and the dimension of the eigenspace ker (A<sup>Θ</sup> − λ) coincides with the dimension of the range of the residue of the function in (6.3.6) at λ.

In what follows some special types of boundary conditions will be discussed.

**Example 6.3.6.** Let {C², Γ0, Γ1} be the boundary triplet in Proposition 6.3.1 with corresponding γ-field γ and Weyl function M. Consider a 2 × 2 diagonal matrix

$$
\Theta = \begin{pmatrix} \alpha & 0 \\ 0 & \beta \end{pmatrix}, \qquad \alpha, \beta \in \mathbb{R}.
$$

The domain of the corresponding self-adjoint Sturm–Liouville operator A<sup>Θ</sup> is given by

$$\text{dom}\,A\_{\Theta} = \left\{ f \in \text{dom}\,T\_{\text{max}} : \alpha f(a) = (pf')(a), \,\beta f(b) = -(pf')(b) \right\}.$$

Such boundary conditions are often called separated boundary conditions. The eigenvalues of A<sup>Θ</sup> have multiplicity one and they coincide with the poles of the function

$$
\lambda \mapsto v(\lambda) \begin{pmatrix} \beta u\_2(b,\lambda) + (p u\_2')(b,\lambda) & 1 \\ 1 & u\_1(b,\lambda) + \alpha u\_2(b,\lambda) \end{pmatrix},
$$

where

$$v(\lambda) = \frac{u\_2(b, \lambda)}{(u\_1(b, \lambda) + \alpha u\_2(b, \lambda))(\beta u\_2(b, \lambda) + (pu\_2')(b, \lambda)) - 1}.$$

In the special case α = β = 0 the operator A<sup>Θ</sup> is defined on ker Γ1, which corresponds to Neumann boundary conditions. In this situation the poles of the function

$$
\lambda \mapsto -M(\lambda)^{-1} = \frac{1}{(pu\_1')(b,\lambda)} \begin{pmatrix} (pu\_2')(b,\lambda) & 1\\ 1 & u\_1(b,\lambda) \end{pmatrix}
$$

coincide with the Neumann eigenvalues.
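In the constant-coefficient setting of Example 6.3.3 the Neumann eigenvalues can be computed explicitly from this criterion; the following short computation is an added illustration, not part of the original text. Since (pu1')(b, λ) = −√(λ − q) sin[√(λ − q)(b − a)], the poles of −M(λ)⁻¹ occur precisely at

```latex
\lambda = \frac{(k\pi)^2}{(b-a)^2} + q, \qquad k = 0, 1, 2, \dots,
```

where the value k = 0, that is, λ = q, corresponds to the constant eigenfunction u1(·, q) ≡ 1.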

Next the boundary triplet {C², Γ0, Γ1} in Proposition 6.3.1 will be transformed so that the so-called periodic boundary conditions can be treated in a convenient way. For this consider the matrix

$$
\mathcal{W} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & -1 \\ -1 & -1 & 0 & 0 \end{pmatrix},
$$

and note that the condition (2.5.1) in Theorem 2.5.1 is satisfied. It then follows that {C², Υ0, Υ1}, where

$$\Upsilon\_0 f = \frac{1}{\sqrt{2}} \begin{pmatrix} f(a) - f(b) \\ (pf')(a) - (pf')(b) \end{pmatrix}, \qquad f \in \text{dom}\, T\_{\text{max}}, \tag{6.3.7}$$

and

$$\Upsilon\_1 f = \frac{1}{\sqrt{2}} \begin{pmatrix} (pf')(a) + (pf')(b) \\ -f(a) - f(b) \end{pmatrix}, \qquad f \in \text{dom}\, T\_{\text{max}}, \tag{6.3.8}$$

is a boundary triplet for Tmax. In order to compute the corresponding γ-field γ<sup>Υ</sup> and Weyl function M<sup>Υ</sup> use Proposition 2.5.5 and note first that

$$
\begin{pmatrix} 0 & 0 \\ -1 & -1 \end{pmatrix} + \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix} M(\lambda) = \frac{1}{u\_2(b,\lambda)} \begin{pmatrix} -1 - u\_1(b,\lambda) & 1 + (pu\_2')(b,\lambda) \\ -u\_2(b,\lambda) & -u\_2(b,\lambda) \end{pmatrix}
$$

and

$$
\left( \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix} M(\lambda) \right)^{-1} = w(\lambda) \begin{pmatrix} 1 - (pu\_2')(b, \lambda) & u\_2(b, \lambda) \\ u\_1(b, \lambda) - 1 & u\_2(b, \lambda) \end{pmatrix},
$$

where

$$w(\lambda) = \frac{1}{2 - u\_1(b, \lambda) - (pu\_2')(b, \lambda)}$$

and <sup>M</sup> is the Weyl function corresponding to the boundary triplet {C2, <sup>Γ</sup>0, <sup>Γ</sup>1} in Proposition 6.3.1. Now it follows that the γ-field γ<sup>Υ</sup> and Weyl function M<sup>Υ</sup> of {C2, <sup>Υ</sup>0, <sup>Υ</sup>1} are given by

$$\gamma\_{\Upsilon}(\lambda) = \begin{pmatrix} u\_1(\cdot, \lambda) & u\_2(\cdot, \lambda) \end{pmatrix} w(\lambda) \begin{pmatrix} 1 - (pu\_2')(b, \lambda) & u\_2(b, \lambda) \\ (pu\_1')(b, \lambda) & 1 - u\_1(b, \lambda) \end{pmatrix} \tag{6.3.9}$$

and

$$M\_{\Upsilon}(\lambda) = w(\lambda) \begin{pmatrix} 2(pu\_1')(b,\lambda) & -u\_1(b,\lambda) + (pu\_2')(b,\lambda) \\ -u\_1(b,\lambda) + (pu\_2')(b,\lambda) & -2u\_2(b,\lambda) \end{pmatrix} . \tag{6.3.10}$$

As above, it follows from Corollary 6.3.5 and the considerations in Section 3.5 that the eigenvalues of the self-adjoint operator AΥ<sup>0</sup> which corresponds to the boundary condition ker Υ0, that is,

$$\text{dom}\,A\_{\Upsilon\_0} = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f(a) = f(b), \, (pf')(a) = (pf')(b) \right\},$$

coincide with the isolated poles of MΥ, that is, λ is an eigenvalue of AΥ0 if and only if u1(b, λ) + (pu2')(b, λ) = 2. It is important to note that in this situation eigenvalues of multiplicity two may arise and hence a reduction of the spectral problem to a scalar Weyl function as in Corollary 6.3.2 is, in general, not possible. This is the case in the following example, where a symmetric extension of Tmin appears which is not simple.

**Example 6.3.7.** Let {C², Υ0, Υ1} be the boundary triplet in (6.3.7)–(6.3.8) with corresponding γ-field γΥ and Weyl function MΥ in (6.3.9) and (6.3.10), respectively. Consider the Sturm–Liouville expression Lf = −f'' in L²(0, 2π), that is, r = p = 1, q = 0, and (a, b) = (0, 2π). Then the positive eigenvalues k², k ∈ N, of AΥ0 are of multiplicity two and the eigenvalue 0 has multiplicity one. Consider the symmetric operator T''min defined on

$$\text{dom}\,T\_{\text{min}}^{\prime\prime} = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f(0) = f(2\pi), \, f^{\prime}(0) = f^{\prime}(2\pi) = 0 \right\}$$

which arises in the same way as in Corollary 6.3.2, but now with the boundary triplet {C², Γ0, Γ1} from Proposition 6.3.1 replaced by the boundary triplet {C², Υ0, Υ1}. As in Corollary 6.3.2, one obtains that T''min is a closed symmetric operator with defect numbers (1, 1) and adjoint (T''min)∗ defined on

$$\text{dom}\,(T\_{\text{min}}^{\prime\prime})^\* = \left\{ f \in \text{dom}\,T\_{\text{max}} \,:\, f'(0) = f'(2\pi) \right\},$$

and one has Tmin ⊂ T''min ⊂ AΥ0. Moreover, {C, Υ'0, Υ'1}, where

$$\Upsilon\_0'f = \frac{1}{\sqrt{2}} \{ f(0) - f(2\pi) \} \text{ and } \Upsilon\_1'f = \frac{1}{\sqrt{2}} \{ f'(0) + f'(2\pi) \}, \ f \in \text{dom}\left(T\_{\text{min}}^{\prime\prime}\right)^\*,$$

is a boundary triplet for (T-- min )<sup>∗</sup> with corresponding Weyl function M- <sup>Υ</sup> given by

$$M'\_{\Upsilon}(\lambda) = \frac{2u\_1'(2\pi,\lambda)}{2 - u\_1(2\pi,\lambda) - u\_2'(2\pi,\lambda)}.$$

However, in contrast to the situation in Corollary 6.3.2 and Corollary 6.3.5, here T''min is not simple. In fact, since the eigenvalues k², k ∈ N, of AΥ0 have multiplicity two and T''min is a one-dimensional restriction of AΥ0, each k², k ∈ N, is an eigenvalue of T''min of multiplicity one. Therefore, the corresponding eigenfunctions span an infinite-dimensional subspace of L²(0, 2π) which reduces T''min and in which T''min is self-adjoint. Note, however, that a further one-dimensional restriction of T''min leads to the minimal operator Tmin, which is simple by Corollary 6.3.5.
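The multiplicities in this example can also be read off directly from the eigenvalue criterion u1(2π, λ) + u2'(2π, λ) = 2; the following computation is an added illustration, not part of the original text. With the fundamental system u1(x, λ) = cos(√λ x) and u2(x, λ) = sin(√λ x)/√λ (the case q = 0, a = 0 of Example 6.3.3) one has

```latex
u_1(2\pi,\lambda) + u_2'(2\pi,\lambda) = 2\cos\bigl(2\pi\sqrt{\lambda}\bigr) = 2
\iff \sqrt{\lambda} = k, \quad \text{i.e.,} \quad \lambda = k^2, \ k = 0, 1, 2, \dots
```

For k ≥ 1 both cos(kx) and sin(kx) satisfy the periodic boundary conditions, which accounts for the multiplicity two, while for k = 0 only the constant function remains.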

This section is concluded with the limit-circle case. It is assumed that the coefficient functions satisfy (6.1.2) and that both endpoints a and b are in the limit-circle case; cf. Theorem 6.1.6. Now consider real solutions ua, va and ub, vb of (L − λ0)y = 0, λ0 ∈ R, which satisfy W(ua, va) = 1 and W(ub, vb) = 1, while ua, va ∈ L²ᵣ(a, a') and ub, vb ∈ L²ᵣ(b', b). In the following it is tacitly assumed that the quasi-derivatives at a are defined in terms of ua, va and that the quasi-derivatives at b are defined in terms of ub, vb.

Fix a fundamental system (u1(·, λ); u2(·, λ)) for the equation (L − λ)y = 0 by the initial conditions

$$
\begin{pmatrix} u\_1^{[0]}(a,\lambda) & u\_2^{[0]}(a,\lambda) \\ u\_1^{[1]}(a,\lambda) & u\_2^{[1]}(a,\lambda) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
$$

Recall that for each function f ∈ dom Tmax the quasi-derivatives f[0](a), f[1](a), f[0](b), f[1](b) are well defined; cf. Definition 6.2.4. The proof of the following proposition follows the lines of the proof of Proposition 6.3.1 in conjunction with Lemma 6.2.5 and Proposition 6.2.6.

**Proposition 6.3.8.** Assume that the endpoints a and b are in the limit-circle case. Then {C², Γ0, Γ1}, where

$$
\Gamma\_0 f = \begin{pmatrix} f^{[0]}(a) \\ f^{[0]}(b) \end{pmatrix} \quad \text{and} \quad \Gamma\_1 f = \begin{pmatrix} f^{[1]}(a) \\ -f^{[1]}(b) \end{pmatrix}, \quad f \in \text{dom}\, T\_{\text{max}},
$$

is a boundary triplet for Tmax . The self-adjoint extension A<sup>0</sup> corresponding to Γ<sup>0</sup> is the restriction of Tmax defined on

$$\text{dom}\,A\_0 = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f^{[0]}(a) = f^{[0]}(b) = 0 \right\},$$

and the minimal operator Tmin is the restriction of Tmax defined on

$$\text{dom}\,T\_{\text{min}} = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f^{[0]}(a) = f^{[0]}(b) = f^{[1]}(a) = f^{[1]}(b) = 0 \right\}.$$

Moreover, for all $\lambda \in \rho(A\_0)$ one has $u\_2^{[0]}(b, \lambda) \neq 0$. The corresponding $\gamma$-field and Weyl function are given by

$$\gamma(\lambda) = \begin{pmatrix} u\_1(\cdot,\lambda) & u\_2(\cdot,\lambda) \end{pmatrix} \frac{1}{u\_2^{[0]}(b,\lambda)} \begin{pmatrix} u\_2^{[0]}(b,\lambda) & 0 \\ -u\_1^{[0]}(b,\lambda) & 1 \end{pmatrix}, \quad \lambda \in \rho(A\_0),$$

and

$$M(\lambda) = \frac{1}{u\_2^{[0]}(b,\lambda)} \begin{pmatrix} -u\_1^{[0]}(b,\lambda) & 1\\ 1 & -u\_2^{[1]}(b,\lambda) \end{pmatrix}, \quad \lambda \in \rho(A\_0).$$
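As an illustration of Proposition 6.3.8, consider the simplest limit-circle situation $L = -d^2/dx^2$ on $(0,1)$ with $p = r = 1$ and $q = 0$: both endpoints are regular (hence limit-circle) and the quasi-derivatives reduce to $f$ and $f'$. The following sketch (the interval and the test point $\lambda = i$ are choices made here for illustration, not taken from the text) assembles the $2\times 2$ Weyl function from the closed-form solutions $u\_1(x,\lambda) = \cos(\sqrt{\lambda}\,x)$ and $u\_2(x,\lambda) = \sin(\sqrt{\lambda}\,x)/\sqrt{\lambda}$ and checks two structural properties every Weyl function must have: symmetry $M(\lambda)^\top = M(\lambda)$ and positivity of $\text{Im}\, M(\lambda)$ in the upper half-plane.

```python
import cmath

def weyl_matrix(lam, b=1.0):
    # M(lam) from Proposition 6.3.8 for L = -d^2/dx^2 on (0, b): here
    # u1(x) = cos(sqrt(lam) x), u2(x) = sin(sqrt(lam) x)/sqrt(lam), and the
    # quasi-derivatives are f^[0] = f, f^[1] = f' at the regular endpoints
    s = cmath.sqrt(lam)
    u1b = cmath.cos(s * b)
    u2b = cmath.sin(s * b) / s
    du2b = cmath.cos(s * b)            # u2'(b)
    return [[-u1b / u2b, 1 / u2b],
            [1 / u2b, -du2b / u2b]]

M = weyl_matrix(1j)
im = [[M[i][j].imag for j in range(2)] for i in range(2)]   # Im M (M is symmetric)
# Nevanlinna property: Im M(lam) is positive definite for Im(lam) > 0,
# checked here via trace and determinant of the 2x2 matrix Im M
print(im[0][0] + im[1][1] > 0, im[0][0] * im[1][1] - im[0][1] * im[1][0] > 0)
```

The conjugate symmetry $M(\overline{\lambda}) = \overline{M(\lambda)}$ also holds, since the fundamental solutions have real coefficients.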

## **6.4 The case of one limit-point endpoint**

Let $T\_{\text{max}} = (T\_{\text{min}})^*$ be the maximal operator associated with the Sturm–Liouville differential expression $L$ in (6.1.1) on the interval $(a, b)$. This situation will be considered under the assumption that the endpoint $a$ is regular and the endpoint $b$ is in the limit-point case. At the end of the section also the situation where $a$ is in the limit-circle case is briefly discussed.

Recall that the assumption that the endpoint a is regular means

$$\begin{cases} p(x) \neq 0, \ r(x) > 0, & \text{for a.e. } x \in (a, b), \\ 1/p, q, r \in L^1(a, a'), & 1/p, q, r \in L^1\_{\text{loc}}(a', b); \end{cases}$$

cf. Definition 6.1.1. Let the fundamental system $(u\_1(\cdot,\lambda), u\_2(\cdot,\lambda))$ for the equation $(L - \lambda)y = 0$ be fixed by the initial conditions

$$
\begin{pmatrix} u\_1(a,\lambda) & u\_2(a,\lambda) \\ (pu\_1')(a,\lambda) & (pu\_2')(a,\lambda) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} . \tag{6.4.1}
$$


For each function $f \in \text{dom}\, T\_{\text{max}}$ one has $f, pf' \in AC[a, b)$ and the quantities $f(a)$ and $(pf')(a)$ are well defined; cf. Theorem 6.1.2.

**Proposition 6.4.1.** Assume that the endpoint $a$ is regular and that the endpoint $b$ is in the limit-point case. Then $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$, where

$$
\Gamma\_0 f = f(a) \quad \text{and} \quad \Gamma\_1 f = (pf')(a), \quad f \in \text{dom}\, T\_{\text{max}}\,,\tag{6.4.2}
$$

is a boundary triplet for the operator $(T\_{\text{min}})^* = T\_{\text{max}}$. The self-adjoint extension $A\_0$ corresponding to $\Gamma\_0$ is the restriction of $T\_{\text{max}}$ defined on

$$\text{dom}\,A\_0 = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f(a) = 0 \right\},$$

and the minimal operator $T\_{\text{min}}$ is the restriction of $T\_{\text{max}}$ defined on

$$\text{dom}\,T\_{\text{min}} = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f(a) = (pf')(a) = 0 \right\}.$$

Moreover, if $\lambda \in \mathbb{C} \backslash \mathbb{R}$ and $\chi(\cdot, \lambda)$ is a nontrivial element in $\mathfrak{N}\_\lambda(T\_{\text{max}})$, then one has $\chi(a, \lambda) \neq 0$. For all $\lambda \in \mathbb{C} \backslash \mathbb{R}$ the corresponding $\gamma$-field and Weyl function are given by

$$\gamma(\cdot,\lambda) = u\_1(\cdot,\lambda) + M(\lambda)u\_2(\cdot,\lambda) \quad \text{and} \quad M(\lambda) = \frac{(p\chi')(a,\lambda)}{\chi(a,\lambda)}.\tag{6.4.3}$$

Proof. First it will be verified that the mapping $(\Gamma\_0, \Gamma\_1) : \text{dom}\, T\_{\text{max}} \to \mathbb{C}^2$ is surjective. Let $\alpha \in \mathbb{C}^2$. Then there exists $f \in \text{dom}\, T\_{\text{max}}$ such that $f(a) = \alpha\_1$ and $(pf')(a) = \alpha\_2$, which vanishes in a neighborhood of $b$. To see this, let $h$ be a solution of $Ly = 0$ with $h, ph' \in AC[a, b)$ and $h(a) = \alpha\_1$, $(ph')(a) = \alpha\_2$. Now by cutting off the function $h$ near $b$ one obtains a function $f$ which satisfies $Lf = g$ for some $g \in L^2\_r(a, b)$ and which vanishes in a neighborhood of $b$; see Proposition 6.1.3. Hence, $f \in \text{dom}\, T\_{\text{max}}$ and at $a$ one has

$$
\Gamma\_0 f = f(a) = h(a) = \,\,\alpha\_1 \quad \text{and} \quad \Gamma\_1 f = (pf')(a) = (ph')(a) = \alpha\_2.
$$

This proves the claim.

Next one verifies the abstract Green identity. For this one shows first that $\lim\_{x \to b} W\_x(f, \overline{g}) = 0$ for all $f, g \in \text{dom}\, T\_{\text{max}}$. In fact, since the endpoint $b$ is in the limit-point case, it follows from Corollary 6.2.2 that $T\_{\text{min}}$ has defect numbers $(1, 1)$. Now choose $h\_1, h\_2 \in \text{dom}\, T\_{\text{max}}$ such that

$$h\_1(a) = 1, \quad (ph\_1')(a) = 0, \quad h\_2(a) = 0, \quad (ph\_2')(a) = 1,$$

and such that $h\_1$ and $h\_2$ vanish in a neighborhood of $b$; cf. Proposition 6.1.3. Then $h\_1, h\_2 \notin \text{dom}\, T\_{\text{min}}$, since otherwise

$$\lim\_{x \to a} W\_x(h\_i, \overline{g}) = h\_i(a) \overline{(pg')(a)} - (ph\_i')(a)\overline{g(a)} = 0, \quad i = 1, 2,$$

for all $g \in \text{dom}\, T\_{\text{max}}$ by (6.2.7) and Lemma 6.2.3, which is not possible by the considerations in the beginning of the proof. The same argument shows that no nontrivial linear combination of $h\_1$ and $h\_2$ belongs to $\text{dom}\, T\_{\text{min}}$, and since $T\_{\text{min}}$ has defect numbers $(1, 1)$, the functions $h\_1, h\_2$ span $\text{dom}\, T\_{\text{max}}$ modulo $\text{dom}\, T\_{\text{min}}$. Thus, every function $f \in \text{dom}\, T\_{\text{max}}$ can be written in the form

$$f = f\_0 + c\_1 h\_1 + c\_2 h\_2, \qquad f\_0 \in \text{dom}\, T\_{\text{min}}\,,$$

for some $c\_1, c\_2 \in \mathbb{C}$. Observe that therefore

$$W\_x(f, \overline{g}) = W\_x(f\_0, \overline{g}) + W\_x(c\_1 h\_1 + c\_2 h\_2, \overline{g})$$

for all $g \in \text{dom}\, T\_{\text{max}}$, and since the last term vanishes in a neighborhood of $b$ one obtains

$$\lim\_{x \to b} W\_x(f, \overline{g}) = \lim\_{x \to b} W\_x(f\_0, \overline{g}) = 0$$

for all $g \in \text{dom}\, T\_{\text{max}}$. Hence, by (6.2.6) and Lemma 6.2.3, for $f, g \in \text{dom}\, T\_{\text{max}}$ one has

$$\begin{aligned} (T\_{\max}f,g)\_{L^2\_r(a,b)} - (f,T\_{\max}g)\_{L^2\_r(a,b)} &= -\lim\_{x \to a} W\_x(f,\overline{g}) \\ &= (pf')(a)\overline{g(a)} - f(a)\overline{(pg')(a)}, \end{aligned}$$

which implies that the abstract Green identity is satisfied with the choice of $\Gamma\_0$ and $\Gamma\_1$ in (6.4.2). Thus, (6.4.2) defines a boundary triplet for $(T\_{\text{min}})^* = T\_{\text{max}}$.

The description of the domain of the self-adjoint extension $A\_0$ is trivial. Furthermore, by Proposition 2.1.2 (ii) one has $\text{dom}\, T\_{\text{min}} = \ker \Gamma\_0 \cap \ker \Gamma\_1$, which yields the stated description of the domain of the minimal operator.

Due to the assumption that the endpoint $b$ is in the limit-point case, each eigenspace $\mathfrak{N}\_\lambda(T\_{\text{max}})$, $\lambda \in \mathbb{C} \backslash \mathbb{R}$, has dimension one. Hence, if $\chi(\cdot, \lambda)$ is a nontrivial element which spans $\mathfrak{N}\_\lambda(T\_{\text{max}})$, $\lambda \in \mathbb{C} \backslash \mathbb{R}$, then, by Definition 2.3.4,

$$M(\lambda) = \left\{ \{ \chi(a, \lambda)c, (p\chi')(a, \lambda)c \} : c \in \mathbb{C} \right\}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Observe that $\chi(a, \lambda) \neq 0$ for $\lambda \in \mathbb{C} \backslash \mathbb{R}$, since otherwise $\chi(\cdot, \lambda) \in \ker (A\_0 - \lambda)$ and the fact that $A\_0$ is self-adjoint would imply $\chi(\cdot, \lambda) = 0$. Consequently,

$$M(\lambda) = \frac{(p\chi')(a,\lambda)}{\chi(a,\lambda)}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Likewise, it follows from Definition 2.3.1 that

$$\gamma(\lambda) = \left\{ \{ \chi(a, \lambda)c, \chi(\cdot, \lambda)c \} : c \in \mathbb{C} \right\}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

and so

$$\gamma(\lambda) = \frac{\chi(\cdot,\lambda)}{\chi(a,\lambda)}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Writing χ(·, λ) in terms of the fundamental system,

$$
\chi(\cdot,\lambda) = \chi(a,\lambda)u\_1(\cdot,\lambda) + (p\chi')(a,\lambda)u\_2(\cdot,\lambda), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},
$$

the form of the $\gamma$-field follows. $\square$

Note that the $\gamma$-field and the Weyl function corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ in Proposition 6.4.1 are defined and analytic on the resolvent set of the self-adjoint operator $A\_0$. The expressions in (6.4.3) extend from $\mathbb{C} \backslash \mathbb{R}$ to all of $\rho(A\_0)$ when $\rho(A\_0) \cap \mathbb{R} \neq \emptyset$. In fact, it follows from the direct sum decomposition $\text{dom}\, T\_{\text{max}} = \text{dom}\, A\_0 \dotplus \mathfrak{N}\_\lambda(T\_{\text{max}})$ that also for each $\lambda \in \rho(A\_0) \cap \mathbb{R}$ there exists a nontrivial element $\chi(\cdot, \lambda)$ in $\mathfrak{N}\_\lambda(T\_{\text{max}})$ such that $\chi(a, \lambda) \neq 0$. It is clear from (6.4.2) that also for these points $\lambda \in \rho(A\_0) \cap \mathbb{R}$ the $\gamma$-field and Weyl function are given by (6.4.3).

**Example 6.4.2.** In the special case $r = p = 1$ on $(a, \infty)$ and a constant $q \in \mathbb{R}$, the Sturm–Liouville expression is $Lf = -f'' + qf$. Fix the square root $\sqrt{\cdot}$ such that $\text{Im}\, \sqrt{\lambda} > 0$ for all $\lambda \in \mathbb{C} \backslash [0, \infty)$ and $\sqrt{\lambda} \geq 0$ for $\lambda \in [0, \infty)$. For all $\lambda \in \mathbb{C} \backslash [q, \infty)$, the function

$$x \mapsto \chi(x, \lambda) = e^{i\sqrt{\lambda - q} \cdot (x - a)} \in L^2(a, \infty)$$

spans the one-dimensional eigenspace $\mathfrak{N}\_\lambda(T\_{\text{max}})$. Hence, the Weyl function $M$ in Proposition 6.4.1 is given by

$$M(\lambda) = \frac{\Gamma\_1 \chi(\cdot, \lambda)}{\Gamma\_0 \chi(\cdot, \lambda)} = i\sqrt{\lambda - q}, \quad \lambda \in \mathbb{C} \backslash [q, \infty).$$

In terms of the fundamental system

$$u\_1(x,\lambda) = \cos\left[\sqrt{\lambda - q}\left(x - a\right)\right], \quad u\_2(x,\lambda) = \frac{\sin\left[\sqrt{\lambda - q}\left(x - a\right)\right]}{\sqrt{\lambda - q}},$$

one has $\chi(x, \lambda) = u\_1(x, \lambda) + M(\lambda)u\_2(x, \lambda) = e^{i\sqrt{\lambda - q}\,(x - a)}$ for $\lambda \in \mathbb{C} \backslash [q, \infty)$. Note also that $M$ is holomorphic on $\mathbb{C} \backslash [q, \infty)$ and that $\sigma(A\_0) = [q, \infty)$. Moreover, for $\lambda \in (q, \infty)$ one has

$$\lim\_{\varepsilon \downarrow 0} \text{Im} \, M(\lambda + i\varepsilon) = \sqrt{\lambda - q} > 0,$$

and for λ ∈ [q, ∞)

$$\lim\_{\varepsilon \downarrow 0} i\varepsilon M(\lambda + i\varepsilon) = 0.$$

Below in Proposition 6.4.4 it is shown that $T\_{\text{min}}$ is simple and hence the results in Section 3.5 and Section 3.6 apply. In particular, it follows from Theorem 3.6.5 that $\sigma\_{\rm ac}(A\_0) = [q, \infty)$ and Corollary 3.5.6 shows $\sigma\_{\rm p}(A\_0) \cap [q, \infty) = \emptyset$ (see also Theorem 3.6.1).
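The closed formulas of Example 6.4.2 are easy to verify numerically. The following sketch (the sample value $q = 2$ and the handful of test points are choices made here for illustration) checks the identity $\chi(\cdot,\lambda) = u\_1(\cdot,\lambda) + M(\lambda)u\_2(\cdot,\lambda)$ and the boundary behavior $\text{Im}\, M(\lambda + i\varepsilon) \to \sqrt{\lambda - q}$ for $\lambda > q$; for $\lambda$ in the closed upper half-plane the principal branch of `cmath.sqrt` coincides with the branch fixed in the example.

```python
import cmath, math

a, q = 0.0, 2.0        # sample values chosen for this check

sqrt = cmath.sqrt      # principal branch; agrees with the fixed branch for Im(lam) >= 0

def M(lam):      return 1j * sqrt(lam - q)        # Weyl function i*sqrt(lam - q)
def u1(x, lam):  return cmath.cos(sqrt(lam - q) * (x - a))
def u2(x, lam):  return cmath.sin(sqrt(lam - q) * (x - a)) / sqrt(lam - q)
def chi(x, lam): return cmath.exp(1j * sqrt(lam - q) * (x - a))

lam = 1.0 + 3.0j
for x in (0.3, 1.0, 2.5):
    # chi = u1 + M * u2, cf. Proposition 6.4.1 and Example 6.4.2
    assert abs(chi(x, lam) - (u1(x, lam) + M(lam) * u2(x, lam))) < 1e-12

# on (q, infinity): Im M(lam + i*eps) -> sqrt(lam - q) > 0 as eps -> 0
for eps in (1e-2, 1e-4, 1e-6):
    print(abs(M(5.0 + 1j * eps).imag - math.sqrt(5.0 - q)))
```

The printed deviations shrink with $\varepsilon$, in line with the limit displayed above.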

**Proposition 6.4.3.** For $\lambda \in \rho(A\_0)$ the resolvent of the self-adjoint extension $A\_0$ is an integral operator of the form

$$\left(\left(A\_0-\lambda\right)^{-1}g\right)(t) = \int\_a^b G\_0(t,s,\lambda)g(s)r(s)ds, \quad g \in L^2\_r(a,b),\tag{6.4.4}$$

where the Green function $G\_0(t, s, \lambda)$ is given by

$$G\_0(t,s,\lambda) = \begin{cases} (u\_1(t,\lambda) + M(\lambda)u\_2(t,\lambda))u\_2(s,\lambda), & a < s < t, \\ u\_2(t,\lambda)(u\_1(s,\lambda) + M(\lambda)u\_2(s,\lambda)), & t < s < b. \end{cases}$$

In particular, if $g \in L^2\_r(a, b)$ has compact support, then

$$\begin{split} \left( (A\_0 - \lambda)^{-1} g \right)(t) &= u\_2(t, \lambda) M(\lambda) \int\_a^b u\_2(s, \lambda) g(s) r(s) \, ds \\ &+ u\_1(t, \lambda) \int\_a^t u\_2(s, \lambda) g(s) r(s) \, ds \\ &+ u\_2(t, \lambda) \int\_t^b u\_1(s, \lambda) g(s) r(s) \, ds. \end{split} \tag{6.4.5}$$

Proof. As in the proof of Proposition 6.3.4, consider the function f(·, λ) given by the right-hand side in (6.4.4), which has the form

$$\begin{split} f(t,\lambda) &= \left( u\_1(t,\lambda) + M(\lambda)u\_2(t,\lambda) \right) \int\_a^t u\_2(s,\lambda)g(s)r(s) \, ds \\ &+ u\_2(t,\lambda) \int\_t^b \left( u\_1(s,\lambda) + M(\lambda)u\_2(s,\lambda) \right) g(s)r(s) \, ds \end{split} \tag{6.4.6}$$

for $g \in L^2\_r(a, b)$. Note that $f(\cdot, \lambda)$ is well defined, since $u\_1(\cdot, \lambda) + M(\lambda)u\_2(\cdot, \lambda)$ belongs to $L^2\_r(a, b)$ by (6.4.3). A straightforward computation shows that $f(\cdot, \lambda)$ is a solution of the inhomogeneous differential equation $(L - \lambda)f = g$ satisfying the initial conditions

$$\begin{aligned} f(a,\lambda)&=0,\\ (pf')(a,\lambda)&=(pu\_2')(a,\lambda)\int\_a^b (u\_1(s,\lambda)+M(\lambda)u\_2(s,\lambda))g(s)r(s)\,ds\\ &=\int\_a^b g(s)\overline{(u\_1(s,\overline{\lambda})+M(\overline{\lambda})u\_2(s,\overline{\lambda}))}\,r(s)\,ds\\ &=(g,\gamma(\overline{\lambda}))\_{L^2\_r(a,b)}.\end{aligned}$$

On the other hand, since $A\_0 \subset T\_{\text{max}}$, it is clear that the function $h = (A\_0 - \lambda)^{-1}g$ also satisfies the inhomogeneous differential equation $(L - \lambda)h = g$ and, moreover,

$$\begin{aligned} h(a) &= \Gamma\_0 h = \Gamma\_0 (A\_0 - \lambda)^{-1} g = 0, \\ (ph')(a) &= \Gamma\_1 h = \Gamma\_1 (A\_0 - \lambda)^{-1} g = \gamma(\overline{\lambda})^\* g = (g, \gamma(\overline{\lambda}))\_{L^2\_r(a,b)}, \end{aligned}$$

where Proposition 2.3.2 (iv) was used. Hence, $f = h$ by the uniqueness property for the initial value problem. This proves (6.4.4). The formula (6.4.5) follows for $g \in L^2\_r(a, b)$ with compact support from (6.4.6). $\square$
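For a concrete instance of the resolvent formula (6.4.6), take $p = r = 1$, $q = 0$ on $(0, \infty)$, so that $\sigma(A\_0) = [0, \infty)$ and $\lambda = -1 \in \rho(A\_0)$, with $M(-1) = i\sqrt{-1} = -1$ and $u\_1 + M u\_2 = \cosh x - \sinh x = e^{-x}$. For the right-hand side $g(s) = e^{-s}$ (a choice made here; $g$ lies in $L^2$ although it is not compactly supported) the problem $-f'' + f = g$, $f(0) = 0$, $f \in L^2(0, \infty)$, has the unique solution $f(t) = \frac{1}{2} t e^{-t}$, and the sketch below checks that the integral formula reproduces it; the quadrature rule and the cutoff replacing $b = \infty$ are ad hoc.

```python
import math

# lam = -1 lies in rho(A_0) = C \ [0, inf); M(-1) = i*sqrt(-1) = -1
def u2(x):  return math.sinh(x)      # sin(sqrt(lam) x)/sqrt(lam) at lam = -1
def chi(x): return math.exp(-x)      # u1 + M*u2 = cosh x - sinh x = e^(-x)
def g(s):   return math.exp(-s)      # right-hand side (choice made here)

def trapz(fun, lo, hi, n=4000):
    if hi <= lo:
        return 0.0
    h = (hi - lo) / n
    return h * (0.5 * (fun(lo) + fun(hi)) + sum(fun(lo + k * h) for k in range(1, n)))

def resolvent(t, cutoff=30.0):
    # f(t) = (u1 + M u2)(t) int_a^t u2 g r ds + u2(t) int_t^b (u1 + M u2) g r ds,
    # which is formula (6.4.6) with r = 1 and b replaced by a numerical cutoff
    return chi(t) * trapz(lambda s: u2(s) * g(s), 0.0, t) \
         + u2(t) * trapz(lambda s: chi(s) * g(s), t, cutoff)

for t in (0.5, 1.0, 2.0):
    print(abs(resolvent(t) - 0.5 * t * math.exp(-t)))   # should be ~ 0
```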

**Proposition 6.4.4.** The minimal operator $T\_{\text{min}}$ is simple.

Proof. It suffices to show that the defect spaces $\mathfrak{N}\_\lambda(T\_{\text{max}})$ span the space $L^2\_r(a, b)$; cf. Corollary 3.4.5. To see this, let $g \in L^2\_r(a, b)$ and assume that

$$\int\_{a}^{b} \left( u\_1(t, \lambda) + M(\lambda) u\_2(t, \lambda) \right) g(t) r(t) \, dt = 0$$

for all $\lambda \in \mathbb{C} \backslash \mathbb{R}$. Then it follows from (6.4.4) that

$$\begin{aligned} \left( (A\_0 - \lambda)^{-1} g \right)(t) &= \left( u\_1(t, \lambda) + M(\lambda) u\_2(t, \lambda) \right) \int\_a^t u\_2(s, \lambda) g(s) r(s) \, ds \\ &+ u\_2(t, \lambda) \int\_t^b \left( u\_1(s, \lambda) + M(\lambda) u\_2(s, \lambda) \right) g(s) r(s) \, ds \\ &= \left( u\_1(t, \lambda) + M(\lambda) u\_2(t, \lambda) \right) \int\_a^t u\_2(s, \lambda) g(s) r(s) \, ds \\ &- u\_2(t, \lambda) \int\_a^t \left( u\_1(s, \lambda) + M(\lambda) u\_2(s, \lambda) \right) g(s) r(s) \, ds \\ &= \int\_a^t \left( u\_1(t, \lambda) u\_2(s, \lambda) - u\_2(t, \lambda) u\_1(s, \lambda) \right) g(s) r(s) \, ds \end{aligned}$$

and the right-hand side is entire in $\lambda$. Now consider a bounded interval $\delta \subset \mathbb{R}$ such that the endpoints of $\delta$ are not eigenvalues of $A\_0$. It follows from Stone's formula (1.5.7) (see also Example A.1.4), Fubini's theorem, and dominated convergence that, for any $h \in L^2\_r(a, b)$ with compact support,

$$\begin{aligned} & \left( E(\delta)g, h \right)\_{L^2\_r(a,b)} \\ &= \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{\delta} \left( \left( \left( A\_0 - (\mu + i\varepsilon) \right)^{-1} - \left( A\_0 - (\mu - i\varepsilon) \right)^{-1} \right) g, h \right)\_{L^2\_r(a,b)} d\mu \\ &= 0. \end{aligned}$$

This implies that $E(\delta)g = 0$ for all bounded intervals $\delta \subset \mathbb{R}$ as above and letting $\delta$ expand to $\mathbb{R}$ one concludes that $g = E(\mathbb{R})g = 0$. $\square$

Let $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ be the boundary triplet in Proposition 6.4.1 with $\gamma$-field and Weyl function given by

$$\gamma(\cdot,\lambda) = u\_1(\cdot,\lambda) + M(\lambda)u\_2(\cdot,\lambda) \quad \text{and} \quad M(\lambda) = \frac{(p\chi')(a,\lambda)}{\chi(a,\lambda)},$$

where $\chi(\cdot, \lambda)$ is a nontrivial element in $\mathfrak{N}\_\lambda(T\_{\text{max}})$, $\lambda \in \rho(A\_0)$. Since the operator $T\_{\text{min}}$ is simple by Proposition 6.4.4, Theorem 3.6.1 shows that the Weyl function $M$ is analytic at $\lambda$ if and only if $\lambda \in \rho(A\_0)$, that $\lambda \in \sigma\_{\rm p}(A\_0)$ if and only if $\lim\_{\varepsilon \downarrow 0} i\varepsilon M(\lambda + i\varepsilon) \neq 0$, that the poles of $M$ coincide with the isolated eigenvalues of $A\_0$, and that $\lambda \in \sigma\_{\rm c}(A\_0)$ if and only if $\lim\_{\varepsilon \downarrow 0} i\varepsilon M(\lambda + i\varepsilon) = 0$ and $M$ does not admit an analytic continuation to $\lambda$. Furthermore, if $\Delta$ is an open interval in $\mathbb{R}$, then

$$\overline{\sigma\_{\rm ac}(A\_0) \cap \Delta} = \operatorname{clos}\_{\rm ac} \left( \left\{ \lambda \in \Delta : 0 < \operatorname{Im} M(\lambda + i0) < +\infty \right\} \right).$$

In the special case Δ = R one has

$$
\sigma\_{\rm ac}(A\_0) = \text{clos}\_{\rm ac} \left( \left\{ \lambda \in \mathbb{R} : 0 < \text{Im} \, M(\lambda + i0) < +\infty \right\} \right).
$$

Furthermore, with the help of Corollary 3.6.9 one can exclude singular continuous spectrum as follows. If Δ is an open interval in R and there exist at most countably many λ ∈ Δ such that

$$\operatorname{Im} M(\lambda + i\varepsilon) \to +\infty, \quad \varepsilon M(\lambda + i\varepsilon) \to 0 \quad \text{as} \quad \varepsilon \downarrow 0,$$

then $\sigma\_{\rm sc}(A\_0) \cap \Delta = \emptyset$. For more results on the description of singular and singular continuous spectra of $A\_0$ in this context see Section 3.6.

Now consider the self-adjoint (maximal dissipative, maximal accumulative) extensions of $T\_{\text{min}}$. According to Corollary 2.1.4, for $\tau \in \mathbb{R}$ ($\tau \in \mathbb{C}\_+$, $\tau \in \mathbb{C}\_-$) the realization $A\_\tau$ of $L$ with domain

$$\text{dom}\,A\_{\tau} = \left\{ f \in \text{dom}\,T\_{\text{max}} : (pf')(a) = \tau f(a) \right\}$$

is self-adjoint (maximal dissipative, maximal accumulative), and the boundary condition $\tau = \infty$ is understood as $f(a) = 0$, which corresponds to $\ker \Gamma\_0$. For $\tau \in \mathbb{R} \cup \{\infty\}$ introduce the following transformation of the boundary triplet in (6.4.2):

$$
\begin{pmatrix} \Gamma\_0^\tau \\ \Gamma\_1^\tau \end{pmatrix} = \frac{1}{\sqrt{\tau^2 + 1}} \begin{pmatrix} \tau & -1 \\ 1 & \tau \end{pmatrix} \begin{pmatrix} \Gamma\_0 \\ \Gamma\_1 \end{pmatrix}. \tag{6.4.7}
$$

Then $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$ is a boundary triplet defined on $\text{dom}\, T\_{\text{max}}$ with corresponding $\gamma$-field and Weyl function given by

$$\gamma\_{\tau}(\lambda) = \frac{\gamma(\lambda)}{\tau - M(\lambda)} \sqrt{\tau^2 + 1} \quad \text{and} \quad M\_{\tau}(\lambda) = \frac{1 + \tau M(\lambda)}{\tau - M(\lambda)}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}; \tag{6.4.8}$$

cf. (2.5.19) and (2.5.20). Moreover, it is clear that

$$\ker \Gamma\_0^\tau = \ker \left(\Gamma\_1 - \tau \Gamma\_0\right) = \text{dom}\, A\_\tau,$$

and again $\tau = \infty$ corresponds to the extension with boundary condition $f(a) = 0$. The spectrum of $A\_\tau$ can now be characterized with the help of the Weyl function $M\_\tau$ in the same way as the spectrum of the extension defined on $\ker \Gamma\_0$ (that is, $\tau = \infty$) was characterized with the function $M$. E.g., $\lambda$ is an eigenvalue of $A\_\tau$ if and only if $\lim\_{\varepsilon \downarrow 0} i\varepsilon M\_\tau(\lambda + i\varepsilon) \neq 0$, and the absolutely continuous spectrum of $A\_\tau$ is given by

$$\sigma\_{\rm ac}(A\_{\tau}) = \operatorname{clos}\_{\rm ac} \left( \left\{ \lambda \in \mathbb{R} : 0 < \operatorname{Im} M\_{\tau}(\lambda + i0) < +\infty \right\} \right).$$
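The effect of the transformation (6.4.8) can be observed directly on the Weyl function $M(\lambda) = i\sqrt{\lambda - q}$ of Example 6.4.2 (with $q = 0$ and sample values of $\tau$ and $\lambda$ chosen here for illustration): for every $\tau \in \mathbb{R}$ the function $M\_\tau = (1 + \tau M)/(\tau - M)$ is again a scalar Nevanlinna function, i.e. $\text{Im}\, M\_\tau(\lambda) > 0$ for $\text{Im}\, \lambda > 0$ and $M\_\tau(\overline{\lambda}) = \overline{M\_\tau(\lambda)}$. A minimal numerical check:

```python
import cmath

def sqrt_up(z):
    # branch of the square root with Im >= 0 (cut along [0, inf)),
    # as fixed in Example 6.4.2
    w = cmath.sqrt(z)
    return -w if w.imag < 0 else w

q = 0.0
def M(lam):
    return 1j * sqrt_up(lam - q)                 # Weyl function of Example 6.4.2

def M_tau(lam, tau):
    return (1 + tau * M(lam)) / (tau - M(lam))   # transformation (6.4.8)

for tau in (-2.0, 0.0, 0.5, 3.0):
    for lam in (1j, 1.0 + 2.0j, -3.0 + 0.5j):
        w = M_tau(lam, tau)
        assert w.imag > 0                        # Nevanlinna property preserved
        assert abs(M_tau(lam.conjugate(), tau) - w.conjugate()) < 1e-12
print("ok")
```

This reflects the fact that (6.4.8) is a fractional linear transformation with real coefficient matrix of positive determinant, which maps the upper half-plane to itself.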

The transformation for the γ-field in (6.4.7) and (6.4.8) also shows up in a transformation of the fundamental solutions.

**Lemma 6.4.5.** Let $\tau \in \mathbb{R} \cup \{\infty\}$. For $\lambda \in \mathbb{C} \backslash \mathbb{R}$ the $\gamma$-field $\gamma\_\tau(\lambda)$ is of the form

$$
\gamma\_\tau(\cdot,\lambda) = v\_1(\cdot,\lambda) + M\_\tau(\lambda)v\_2(\cdot,\lambda).
$$

Here the fundamental system $(v\_1(\cdot,\lambda), v\_2(\cdot,\lambda))$ is given by

$$
\begin{pmatrix} v\_1(\cdot,\lambda) \\ v\_2(\cdot,\lambda) \end{pmatrix} = \frac{1}{\sqrt{\tau^2 + 1}} \begin{pmatrix} \tau & -1 \\ 1 & \tau \end{pmatrix} \begin{pmatrix} u\_1(\cdot,\lambda) \\ u\_2(\cdot,\lambda) \end{pmatrix},
$$

and one has $W(v\_1(\cdot, \lambda), v\_2(\cdot, \lambda)) = 1$.

Proof. Recall that $\gamma(\cdot, \lambda) = u\_1(\cdot, \lambda) + M(\lambda)u\_2(\cdot, \lambda)$. If $v\_1(\cdot, \lambda)$ and $v\_2(\cdot, \lambda)$ are as above, then it is clear that

$$
\begin{pmatrix} u\_1(\cdot,\lambda) \\ u\_2(\cdot,\lambda) \end{pmatrix} = \frac{1}{\sqrt{\tau^2 + 1}} \begin{pmatrix} \tau & 1 \\ -1 & \tau \end{pmatrix} \begin{pmatrix} v\_1(\cdot,\lambda) \\ v\_2(\cdot,\lambda) \end{pmatrix}.
$$

Hence,

$$\begin{split} \gamma(\lambda) &= \frac{1}{\sqrt{\tau^2 + 1}} \Big( \tau v\_1(\cdot, \lambda) + v\_2(\cdot, \lambda) + M(\lambda) \Big( -v\_1(\cdot, \lambda) + \tau v\_2(\cdot, \lambda) \Big) \Big) \\ &= \frac{1}{\sqrt{\tau^2 + 1}} \Big( (\tau - M(\lambda)) v\_1(\cdot, \lambda) + (1 + \tau M(\lambda)) v\_2(\cdot, \lambda) \Big) \\ &= \frac{\tau - M(\lambda)}{\sqrt{\tau^2 + 1}} \left( v\_1(\cdot, \lambda) + \frac{1 + \tau M(\lambda)}{\tau - M(\lambda)} v\_2(\cdot, \lambda) \right), \end{split}$$

where it was used that $M(\lambda) \neq \tau$ for $\lambda \in \mathbb{C} \backslash \mathbb{R}$, since $\tau \in \mathbb{R}$ and $\text{Im}\, M(\lambda) \neq 0$. This leads to

$$\frac{\gamma(\lambda)}{\tau - M(\lambda)} \sqrt{\tau^2 + 1} = v\_1(\cdot, \lambda) + \frac{1 + \tau M(\lambda)}{\tau - M(\lambda)} v\_2(\cdot, \lambda).$$

Comparing with (6.4.8) one obtains the claimed form of $\gamma\_\tau(\lambda)$. $\square$

Note that the formal solution

$$v\_2(\cdot,\lambda) = \frac{1}{\sqrt{\tau^2 + 1}} \left( u\_1(\cdot,\lambda) + \tau u\_2(\cdot,\lambda) \right) \tag{6.4.9}$$

satisfies the boundary condition $(pv\_2')(a, \lambda) = \tau v\_2(a, \lambda)$, which is connected with the self-adjoint realization $A\_\tau$ defined on $\ker \Gamma\_0^\tau = \ker (\Gamma\_1 - \tau\Gamma\_0)$. Observe that for $g \in L^2\_r(a, b)$ and $\lambda \in \rho(A\_\tau)$ the resolvent of $A\_\tau$ has the form

$$\left(\left(A\_{\tau}-\lambda\right)^{-1}g\right)(t) = \int\_{a}^{b} G\_{\tau}(t,s,\lambda)g(s)r(s)\,ds, \quad g \in L^{2}\_{r}(a,b),\tag{6.4.10}$$

where the Green function is given by

$$G\_{\tau}(t,s,\lambda) = \begin{cases} (v\_1(t,\lambda) + M\_{\tau}(\lambda)v\_2(t,\lambda))v\_2(s,\lambda), & a < s < t, \\ v\_2(t,\lambda)(v\_1(s,\lambda) + M\_{\tau}(\lambda)v\_2(s,\lambda)), & t < s < b, \end{cases} \tag{6.4.11}$$



that is,

$$\begin{aligned} \left( (A\_{\tau} - \lambda)^{-1} g \right)(t) &= \left( v\_1(t, \lambda) + M\_{\tau}(\lambda) v\_2(t, \lambda) \right) \int\_{a}^{t} v\_2(s, \lambda) g(s) r(s) \, ds \\ &+ v\_2(t, \lambda) \int\_{t}^{b} \left( v\_1(s, \lambda) + M\_{\tau}(\lambda) v\_2(s, \lambda) \right) g(s) r(s) \, ds . \end{aligned}$$

This follows in the same way as in the proof of Proposition 6.4.3. In fact, a straightforward computation shows that the right-hand side (denoted by f(·, λ)) satisfies the differential equation (L − λ)f = g and that

$$\begin{aligned} f(a,\lambda) &= \frac{1}{\sqrt{\tau^2 + 1}} \int\_a^b \left( v\_1(s,\lambda) + M\_\tau(\lambda) v\_2(s,\lambda) \right) g(s) r(s) \, ds, \\ (pf')(a,\lambda) &= \frac{\tau}{\sqrt{\tau^2 + 1}} \int\_a^b \left( v\_1(s,\lambda) + M\_\tau(\lambda) v\_2(s,\lambda) \right) g(s) r(s) \, ds. \end{aligned}$$

Hence,

$$\frac{1}{\sqrt{\tau^2 + 1}} \left( \tau f(a, \lambda) - (pf')(a, \lambda) \right) = 0$$

and

$$\begin{aligned} \frac{1}{\sqrt{\tau^2 + 1}} \Big( f(a, \lambda) + \tau (pf')(a, \lambda) \Big) &= \int\_a^b \Big( v\_1(s, \lambda) + M\_\tau(\lambda) v\_2(s, \lambda) \Big) g(s) r(s) \, ds \\ &= (g, \gamma\_\tau(\overline{\lambda}))\_{L^2\_r(a, b)}. \end{aligned}$$

Since $h = (A\_\tau - \lambda)^{-1}g$ also satisfies the equation $(L - \lambda)h = g$ and the same boundary conditions $\Gamma\_0^\tau h = 0$ and $\Gamma\_1^\tau h = (g, \gamma\_\tau(\overline{\lambda}))\_{L^2\_r(a,b)}$, it follows that $f = h$.

In Theorem 6.4.7 below a unitary Fourier transform for the self-adjoint realization $A\_\tau$, $\tau \in \mathbb{R} \cup \{\infty\}$, will be provided, which takes $A\_\tau$ into multiplication by the independent variable in the space $L^2\_{d\sigma\_\tau}(\mathbb{R})$. Here $\sigma\_\tau$ denotes the nondecreasing function in the integral representation

$$M\_{\tau}(\lambda) = \alpha\_{\tau} + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{1 + t^2} \right) d\sigma\_{\tau}(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{6.4.12}$$

of the Weyl function $M\_\tau$. Observe that no linear term is present in the integral representation since $A\_\tau$ is not multivalued (this follows, e.g., from Lemma A.4.3 and Proposition 3.5.7). Recall that $\alpha\_\tau$ is a real constant and that $\int\_{\mathbb{R}} (1 + t^2)^{-1}\, d\sigma\_\tau(t)$ is finite; cf. Theorem A.2.5.

The following preparatory lemma shows that the condition (B.1.2) in Appendix B for the Fourier transform is satisfied.

**Lemma 6.4.6.** Let $\tau \in \mathbb{R} \cup \{\infty\}$ and let $E\_\tau(\cdot)$ be the spectral measure of the self-adjoint operator $A\_\tau$. For $f \in L^2\_r(a, b)$ with compact support define the Fourier transform $\widehat{f}$ by

$$
\widehat{f}(\mu) = \int\_a^b v\_2(s,\mu) f(s) r(s) \, ds, \quad \mu \in \mathbb{R},
$$

where $v\_2(\cdot, \mu)$ is the formal solution in (6.4.9). Let $\sigma\_\tau$ be the function in the integral representation (6.4.12) of the Weyl function $M\_\tau$. Then for every bounded open interval $\delta \subset \mathbb{R}$ whose endpoints are not eigenvalues of $A\_\tau$ one has

$$(E\_\tau(\delta)f, f)\_{L^2\_r(a,b)} = \int\_\delta \widehat{f}(\mu) \, \overline{\widehat{f}(\mu)} \, d\sigma\_\tau(\mu).$$

Proof. Observe that for $f \in L^2\_r(a, b)$ with compact support and $\lambda \in \rho(A\_\tau)$ the resolvent of $A\_\tau$ can be written as

$$\begin{split} \left( (A\_{\tau} - \lambda)^{-1} f \right)(t) &= M\_{\tau}(\lambda) v\_2(t, \lambda) \int\_{a}^{b} v\_2(s, \lambda) f(s) r(s) \, ds \\ &+ v\_1(t, \lambda) \int\_{a}^{t} v\_2(s, \lambda) f(s) r(s) \, ds + v\_2(t, \lambda) \int\_{t}^{b} v\_1(s, \lambda) f(s) r(s) \, ds. \end{split} \tag{6.4.13}$$

Now let $\delta \subset \mathbb{R}$ be a bounded open interval whose endpoints are not eigenvalues of $A\_\tau$. Then the spectral projection of $A\_\tau$ corresponding to $\delta$ is given by Stone's formula

$$\begin{aligned} & (E\_{\tau}(\delta)f, f)\_{L^{2}\_{r}(a,b)} \\ &= \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{\delta} \left( \left( (A\_{\tau} - (\mu + i\varepsilon))^{-1} - (A\_{\tau} - (\mu - i\varepsilon))^{-1} \right) f, f \right)\_{L^{2}\_{r}(a,b)} d\mu. \end{aligned}$$

If $f \in L^2\_r(a, b)$ has compact support, say in $[a', b'] \subset (a, b)$, then (6.4.13) and the fact that the function

$$\lambda \mapsto v\_1(t, \lambda) \int\_a^t v\_2(s, \lambda) f(s) r(s) \, ds + v\_2(t, \lambda) \int\_t^b v\_1(s, \lambda) f(s) r(s) \, ds$$

in (6.4.13) is entire imply that $(E\_\tau(\delta)f, f)\_{L^2\_r(a,b)}$ has the form

$$\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{\delta} \left( \int\_{a}^{b}\! \int\_{a}^{b} \left[ g\_{t,s}(\mu + i\varepsilon) M\_{\tau}(\mu + i\varepsilon) - g\_{t,s}(\mu - i\varepsilon) M\_{\tau}(\mu - i\varepsilon) \right] f(s) r(s)\, ds \, \overline{f(t)} r(t)\, dt \right) d\mu, \tag{6.4.14}$$

where $g\_{t,s}$ is defined by $g\_{t,s}(\eta) = v\_2(t, \eta)v\_2(s, \eta)$. Note that for $t, s \in [a', b']$ the function $g\_{t,s}$ is entire in $\eta$. For $\varepsilon\_0 > 0$ and $A < B$ such that $\delta \subset (A, B)$ consider the rectangle $R = [A, B] \times [-i\varepsilon\_0, i\varepsilon\_0]$. The function $\{t, s, \eta\} \mapsto g\_{t,s}(\eta)$ is bounded on $[a', b'] \times [a', b'] \times R$ and hence for each fixed $\varepsilon$ such that $0 < \varepsilon \leq \varepsilon\_0$ it follows that

$$\begin{aligned} &\int\_{\delta} \int\_{a}^{b} \int\_{a}^{b} \left[ g\_{t,s}(\mu + i\varepsilon) M\_{\tau}(\mu + i\varepsilon) - g\_{t,s}(\mu - i\varepsilon) M\_{\tau}(\mu - i\varepsilon) \right] f(s) r(s)\, ds \, \overline{f(t)} r(t)\, dt \, d\mu \\ &\quad = \int\_{a}^{b} \int\_{a}^{b} \left( \int\_{\delta} \left[ g\_{t,s}(\mu + i\varepsilon) M\_{\tau}(\mu + i\varepsilon) - g\_{t,s}(\mu - i\varepsilon) M\_{\tau}(\mu - i\varepsilon) \right] d\mu \right) f(s) r(s)\, ds \, \overline{f(t)} r(t)\, dt. \end{aligned}$$

The Stieltjes inversion formula in Lemma A.2.7 shows that

$$\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{\delta} \left[ (g\_{t,s} M\_{\tau}) (\mu + i\varepsilon) - (g\_{t,s} M\_{\tau}) (\mu - i\varepsilon) \right] d\mu = \int\_{\delta} g\_{t,s} (\mu) \, d\sigma\_{\tau} (\mu)$$

for all $t, s \in [a', b']$. To justify taking the limit $\varepsilon \downarrow 0$ inside the integral (6.4.14) one needs dominated convergence. Recall from Lemma A.2.7 that there exists a constant $m \geq 0$ such that for $0 < \varepsilon \leq \varepsilon\_0$ one has

$$\begin{split} \left| \int\_{\delta} \left[ (g\_{t,s}M\_{\tau}) (\mu + i\varepsilon) - (g\_{t,s}M\_{\tau}) (\mu - i\varepsilon) \right] d\mu \right| \\ \leq m \sup \{ |g\_{t,s}(\eta)|, |g'\_{t,s}(\eta)| : t, s \in [a', b'], \ \eta \in R \}, \end{split} \tag{6.4.15}$$

where $R = [A, B] \times [-i\varepsilon\_0, i\varepsilon\_0]$. Since $\{t, s, \eta\} \mapsto g\_{t,s}(\eta)$ and $\{t, s, \eta\} \mapsto g\_{t,s}'(\eta)$ are bounded functions on $[a', b'] \times [a', b'] \times R$, it follows that the integral in (6.4.15) regarded as a function in $\{t, s\}$ on $[a', b'] \times [a', b']$ is bounded by some constant for all $0 < \varepsilon \leq \varepsilon\_0$. As $f \in L^2\_r(a, b)$ has compact support, there is an integrable majorant for the integrands in (6.4.14). Dominated convergence and Fubini's theorem yield

$$\begin{aligned} (E\_\tau(\delta)f,f)\_{L^2\_r(a,b)} &= \int\_a^b \int\_a^b \left( \int\_\delta g\_{t,s}(\mu) \, d\sigma\_\tau(\mu) \right) f(s) r(s) ds \, \overline{f(t)} r(t) dt \\ &= \int\_\delta \left( \int\_a^b v\_2(s,\mu) f(s) r(s) \, ds \right) \left( \int\_a^b v\_2(t,\mu) \overline{f(t)} r(t) \, dt \right) d\sigma\_\tau(\mu) \end{aligned}$$

for every bounded open interval $\delta$ whose endpoints are not eigenvalues of $A\_\tau$. Now the assertion follows from the definition of $\widehat{f}$. $\square$

The next theorem is a consequence of Lemma 6.4.6 and Theorem B.1.4 in Appendix B.

**Theorem 6.4.7.** Let $\tau \in \mathbb{R} \cup \{\infty\}$, let $v\_2(\cdot, \mu)$ be the formal solution in (6.4.9), and let $\sigma\_\tau$ be the function in the integral representation of the Weyl function $M\_\tau$. Then the Fourier transform

$$f \mapsto \widehat{f}, \qquad \widehat{f}(\mu) = \int\_a^b v\_2(s, \mu) f(s) r(s) \, ds, \quad \mu \in \mathbb{R},$$

extends by continuity from compactly supported functions $f \in L^2\_r(a, b)$ to a unitary mapping $\mathcal{F} : L^2\_r(a, b) \to L^2\_{d\sigma\_\tau}(\mathbb{R})$, such that the self-adjoint operator $A\_\tau$ in $L^2\_r(a, b)$ is unitarily equivalent to multiplication by the independent variable in $L^2\_{d\sigma\_\tau}(\mathbb{R})$.

Proof. It follows from Lemma 6.4.6 that the condition (B.1.2) is satisfied. It is also clear that for every $\mu \in \mathbb{R}$ there exists $s \in (a, b)$ such that $v\_2(s, \mu) \neq 0$ and hence (B.1.13) holds. Now the result follows from Theorem B.1.4. $\square$
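The unitarity statement of Theorem 6.4.7 can be tested numerically in the setting of Example 6.4.2 with $q = 0$ on $(0, \infty)$ and $\tau = \infty$ (Dirichlet boundary condition, so $v\_2 = u\_2$), where $M(\lambda) = i\sqrt{\lambda}$ and hence, by the Stieltjes inversion formula, $d\sigma(\mu) = \pi^{-1}\, \text{Im}\, M(\mu + i0)\, d\mu = \pi^{-1}\sqrt{\mu}\, d\mu$ on $[0, \infty)$. The sketch below (the test function, cutoffs, and quadrature are ad hoc choices made here) checks the Parseval identity $\|f\|^2\_{L^2\_r} = \int |\widehat{f}(\mu)|^2\, d\sigma(\mu)$; the $\mu$-integral is evaluated in the variable $k = \sqrt{\mu}$ to avoid the square-root singularity at $0$.

```python
import math

def v2(x, mu):
    # v2(x, mu) = sin(sqrt(mu) x)/sqrt(mu) for tau = infinity (v2 = u2)
    k = math.sqrt(mu)
    return math.sin(k * x) / k if k > 0 else x

def trapz(fun, lo, hi, n):
    h = (hi - lo) / n
    return h * (0.5 * (fun(lo) + fun(hi)) + sum(fun(lo + j * h) for j in range(1, n)))

f = lambda s: s * math.exp(-s)     # test function, ||f||^2 = 1/4

def fhat(mu):                      # Fourier transform of Theorem 6.4.7
    return trapz(lambda s: v2(s, mu) * f(s), 0.0, 40.0, 2000)

lhs = trapz(lambda s: f(s) ** 2, 0.0, 40.0, 2000)
# int |fhat|^2 dsigma with dsigma = sqrt(mu)/pi dmu, substituting mu = k^2
rhs = (2.0 / math.pi) * trapz(lambda k: (k * fhat(k * k)) ** 2, 0.0, 15.0, 800)
print(abs(lhs - 0.25), abs(rhs - lhs))
```

Both printed deviations are small, reflecting that the transform is isometric on this test function.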

In the next lemma the Fourier transform $\mathcal{F}\gamma\_\tau$ of the $\gamma$-field in (6.4.8) corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$ is computed; this will be useful in identifying the model in Theorem 6.4.7 with the model for scalar Nevanlinna functions discussed in Section 4.3.

**Lemma 6.4.8.** Let $\tau \in \mathbb{R} \cup \{\infty\}$ and let $\gamma\_\tau$ be the $\gamma$-field in Lemma 6.4.5. Then for all $\lambda \in \mathbb{C} \backslash \mathbb{R}$ one has almost everywhere in the sense of $d\sigma\_\tau$:

$$\left[\mathcal{F}\gamma\_{\tau}(\lambda)\right](\mu) = \frac{1}{\mu - \lambda}, \qquad \mu \in \mathbb{R},$$

where $\mathcal{F}$ is the Fourier transform from $L^2\_r(a, b)$ onto $L^2\_{d\sigma\_\tau}(\mathbb{R})$ in Theorem 6.4.7.

Proof. Let $f \in L^2\_r(a, b)$. From (6.4.10) one obtains for all $t \in (a, b)$ that

$$\left( (A\_{\tau} - \lambda)^{-1} f \right)(t) = \left( G\_{\tau}(t, \cdot, \lambda), \overline{f} \right)\_{L^{2}\_{r}(a,b)},$$

where both terms are absolutely continuous in t ∈ (a, b). Differentiation yields

$$p(t)\frac{d}{dt}\left\{(A\_\tau-\lambda)^{-1}f\right\}(t) = \left(p(t)\partial\_t G\_\tau(t,\cdot,\lambda),\overline{f}\right)\_{L^2\_r(a,b)},$$

where again both terms are absolutely continuous in t ∈ (a, b). Since the Fourier transform F is unitary, these two formulas lead to

$$\left( (A\_{\tau} - \lambda)^{-1} f \right)(t) = \left( \mathcal{F} G\_{\tau}(t, \cdot, \lambda), \mathcal{F} \overline{f} \right)\_{L^{2}\_{d\sigma\_{\tau}}(\mathbb{R})} \tag{6.4.16}$$

and

$$p(t)\frac{d}{dt}\left\{(A\_\tau-\lambda)^{-1}f\right\}(t) = \left(\mathcal{F}p(t)\partial\_t G\_\tau(t,\cdot,\lambda), \mathcal{F}\overline{f}\right)\_{L^2\_{d\sigma\_{\tau}}(\mathbb{R})},\tag{6.4.17}$$

where all terms are absolutely continuous in t ∈ (a, b).

With $f \in L^2_r(a,b)$ it follows from (B.1.8) that

$$\left( (A\_{\tau} - \lambda)^{-1} f \right)(t) = \int\_{\mathbb{R}} \frac{v\_2(t, \mu)}{\mu - \lambda} (\mathcal{F} f)(\mu) \, d\sigma\_{\tau}(\mu) \tag{6.4.18}$$

for almost all $t \in (a,b)$ and that, in particular, the integrand on the right-hand side is integrable. In fact, if $\mathcal{F}f$ has compact support, then the right-hand side of (6.4.18) is absolutely continuous in $t \in (a,b)$ and hence in this case (6.4.18) holds for all $t \in (a,b)$. Moreover, if $\mathcal{F}f$ has compact support, one also has for all $t \in (a,b)$

$$(p(t)\frac{d}{dt}\left((A\_{\tau}-\lambda)^{-1}f\right)(t) = \int\_{\mathbb{R}} \frac{(pv\_2')(t,\mu)}{\mu-\lambda} \left(\mathcal{F}f\right)(\mu) \,d\sigma\_{\tau}(\mu),\tag{6.4.19}$$

where again all terms are absolutely continuous in $t \in (a,b)$. Differentiation under the integral sign is now allowed, as the integrand has compact support and the function $\mu \mapsto (pv_2')(t,\mu)$ is bounded.

Comparison of the integrals on the right-hand sides of (6.4.16) and (6.4.18) under the assumption that $\mathcal{F}f$ is an arbitrary function in $L^2_{d\sigma_\tau}(\mathbb{R})$ with compact support leads for each $t \in (a,b)$ to

$$\frac{v\_2(t,\mu)}{\mu-\lambda} = \left[\mathcal{F}G\_\tau(t,\cdot,\lambda)\right](\mu), \qquad \mu \in \mathbb{R}, \tag{6.4.20}$$

almost everywhere. Similarly, comparison of the integrals on the right-hand sides of (6.4.17) and (6.4.19) under the assumption that $\mathcal{F}f$ is an arbitrary function in $L^2_{d\sigma_\tau}(\mathbb{R})$ with compact support leads for each $t \in (a,b)$ to

$$\frac{(pv\_2')(t,\mu)}{\mu-\lambda} = \mathcal{F}[p(t)\partial\_t G\_\tau(t,\cdot,\lambda)](\mu), \qquad \mu \in \mathbb{R}, \tag{6.4.21}$$

almost everywhere. The union of the exceptional sets in (6.4.20) and (6.4.21) is denoted by $\Omega(t)$; it has measure 0 in the sense of $d\sigma_\tau$.

Let $t \in (a,b)$; then for all $\mu \in \mathbb{R} \setminus \Omega(t)$ it follows from the identities (6.4.20) and (6.4.21) that

$$\begin{split} &\frac{1}{\mu-\lambda} \left( v\_1(t,\lambda)(pv\_2')(t,\mu) - (pv\_1')(t,\lambda)v\_2(t,\mu) \right) \\ &= v\_1(t,\lambda) \mathcal{F}[p(t)\partial\_t G\_\tau(t,\cdot,\lambda)](\mu) - (pv\_1')(t,\lambda) \left[ \mathcal{F} G\_\tau(t,\cdot,\lambda) \right](\mu) \\ &= \mathcal{F} \left[ v\_1(t,\lambda)p(t)\partial\_t G\_\tau(t,\cdot,\lambda) - (pv\_1')(t,\lambda)G\_\tau(t,\cdot,\lambda) \right](\mu) \\ &= \mathcal{F} \left[ w(t,\cdot,\lambda) \right](\mu), \end{split} \tag{6.4.22}$$

where w(t, s, λ) is defined by

$$w(t,s,\lambda) = \begin{cases} M\_\tau(\lambda)v\_2(s,\lambda), & a < s < t, \\ \gamma\_\tau(s,\lambda), & t < s < b. \end{cases}$$
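The straightforward computation behind this form can be sketched as follows; this is a hedged reconstruction, assuming, as suggested by (6.4.11) and Lemma 6.4.5, that $G_\tau(t,s,\lambda) = v_2(t \wedge s,\lambda)\,\gamma_\tau(t \vee s,\lambda)$, that $\gamma_\tau(\cdot,\lambda) = v_1(\cdot,\lambda) + M_\tau(\lambda)v_2(\cdot,\lambda)$, and that $W(v_1(\cdot,\lambda),v_2(\cdot,\lambda)) = 1$.

```latex
% Evaluate v_1(t,\lambda)p(t)\partial_t G_\tau(t,s,\lambda) - (pv_1')(t,\lambda)G_\tau(t,s,\lambda)
% separately for s < t and s > t, using the assumed product form of the Green function:
\begin{aligned}
s < t:\quad & v_2(s,\lambda)\bigl(v_1\,p\gamma_\tau' - (pv_1')\gamma_\tau\bigr)(t,\lambda)
  = v_2(s,\lambda)\,W(v_1,\gamma_\tau) = M_\tau(\lambda)\,v_2(s,\lambda),\\
s > t:\quad & \gamma_\tau(s,\lambda)\bigl(v_1\,pv_2' - (pv_1')v_2\bigr)(t,\lambda)
  = \gamma_\tau(s,\lambda)\,W(v_1,v_2) = \gamma_\tau(s,\lambda),
\end{aligned}
```

where in the first case $W(v_1,\gamma_\tau) = W(v_1, v_1 + M_\tau v_2) = M_\tau(\lambda)W(v_1,v_2) = M_\tau(\lambda)$ by the bilinearity and antisymmetry of the Wronskian.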

The form of w(t, s, λ) follows from (6.4.11), Lemma 6.4.5, and a straightforward computation. First observe that according to this definition

$$\left\|\gamma\_{\tau}(\cdot,\lambda)-w(t,\cdot,\lambda)\right\|\_{L^{2}\_{r}(a,b)}^{2} = \int\_{a}^{t} |v\_{1}(s,\lambda)|^{2} \, r(s) \, ds \to 0 \quad \text{as} \quad t \to a,$$

and the continuity of F implies that

$$\left\| \mathcal{F}[\gamma\_{\tau}(\cdot, \lambda)] - \mathcal{F}[w(t, \cdot, \lambda)] \right\|\_{L^{2}\_{d\sigma\_{\tau}}(\mathbb{R})} \to 0 \quad \text{as} \quad t \to a.$$

Now approximate the endpoint $a$ by a sequence $(t_n)$. Then there exist a subsequence, again denoted by $(t_n)$, and a set $\Omega$ of measure 0 in the sense of $d\sigma_\tau$, such that pointwise

$$\mathcal{F}[\gamma\_\tau(\cdot,\lambda)](\mu) = \lim\_{n \to \infty} \mathcal{F}[w(t\_n,\cdot,\lambda)](\mu), \quad \mu \in \mathbb{R} \backslash \Omega.$$

Observe that $\bigcup_{n=1}^\infty \Omega(t_n)$ is a set of measure 0 in the sense of $d\sigma_\tau$ and that via (6.4.22)

$$\mathcal{F}[w(t\_n, \cdot, \lambda)](\mu) = \frac{1}{\mu - \lambda} \left( v\_1(t\_n, \lambda)(pv\_2')(t\_n, \mu) - (pv\_1')(t\_n, \lambda)v\_2(t\_n, \mu) \right)$$

for all $\mu \in \mathbb{R} \setminus \bigcup_{n=1}^\infty \Omega(t_n)$. The limit on the right-hand side as $n \to \infty$ gives

$$\frac{1}{\mu-\lambda} \left( v\_1(a,\lambda)(p v\_2')(a,\mu) - (p v\_1')(a,\lambda)v\_2(a,\mu) \right) = \frac{1}{\mu-\lambda},$$

which follows from the special form of the fundamental system $(v_1(\cdot,\lambda); v_2(\cdot,\lambda))$ in Lemma 6.4.5 and (6.4.1). Hence,

$$\mathcal{F}[\gamma\_\tau(\cdot,\lambda)](\mu) = \frac{1}{\mu-\lambda}, \quad \mu \in \mathbb{R} \setminus \left(\Omega \cup \bigcup\_{n=1}^\infty \Omega(t\_n)\right),$$

which completes the proof. $\square$

Lemma 6.4.8 will be used to identify the model in Theorem 6.4.7 with the model for scalar Nevanlinna functions discussed in Section 4.3. The Weyl function $M_\tau$ of the boundary triplet $\{\mathbb{C}, \Gamma^\tau_0, \Gamma^\tau_1\}$ for $T_{\max}$ has the integral representation (6.4.12). By Theorem 4.3.1, there is a closed simple symmetric operator $S$ in $L^2_{d\sigma_\tau}(\mathbb{R})$ such that the Nevanlinna function $M_\tau$ in (6.4.12) is the Weyl function corresponding to the boundary triplet $\{\mathbb{C}, \Gamma_0', \Gamma_1'\}$ for $S^*$ in Theorem 4.3.1. The $\gamma$-field corresponding to $\{\mathbb{C}, \Gamma_0', \Gamma_1'\}$ is denoted by $\gamma'$ and is given by (4.3.8). Furthermore, the self-adjoint restriction $A_0'$ corresponding to the boundary mapping $\Gamma_0'$ is the maximal multiplication operator by the independent variable in $L^2_{d\sigma_\tau}(\mathbb{R})$. By comparing with (4.3.8) one sees that, according to Lemma 6.4.8, the Fourier transform $\mathcal{F}$ from $L^2_r(a,b)$ onto $L^2_{d\sigma_\tau}(\mathbb{R})$, being a unitary mapping, satisfies

$$
\mathcal{F}\gamma\_\tau(\lambda) = \gamma'(\lambda).
$$


Hence, by Theorem 4.2.6, the boundary triplet $\{\mathbb{C}, \Gamma^\tau_0, \Gamma^\tau_1\}$ for the simple operator $T_{\min}$ and the boundary triplet $\{\mathbb{C}, \Gamma_0', \Gamma_1'\}$ for the simple operator $S$ are unitarily equivalent under the Fourier transform $\mathcal{F}$. Thus, not only are $A_0'$ and $A_\tau$ unitarily equivalent under $\mathcal{F}$,

$$A'\_0 = \mathcal{F} A\_\tau \mathcal{F}^{-1},$$

as stated in Theorem 6.4.7, but in fact the complete boundary triplet structure is preserved under the Fourier transform F.

Now assume that the coefficient functions satisfy (6.1.2) and that the endpoint $a$ is in the limit-circle case, while the endpoint $b$ is in the limit-point case. Let $u, v$ be real solutions of $(L - \lambda_0)y = 0$, $\lambda_0 \in \mathbb{R}$, with $W(u,v) = 1$ and $u, v \in L^2_r(a,a')$ for some $a' \in (a,b)$. Let a fundamental system $(u_1(\cdot,\lambda); u_2(\cdot,\lambda))$ for the equation $(L - \lambda)y = 0$ be fixed by the initial conditions (6.2.11). The following proposition is proved along the same lines as Proposition 6.4.1.

**Proposition 6.4.9.** Assume that the endpoint $a$ is in the limit-circle case and that the endpoint $b$ is in the limit-point case. Then $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$, where

$$
\Gamma\_0 f = f^{[0]}(a) \quad \text{and} \quad \Gamma\_1 f = f^{[1]}(a), \quad f \in \text{dom}\, T\_{\text{max}}\,,\tag{6.4.23}
$$

is a boundary triplet for the operator $(T_{\min})^* = T_{\max}$. The self-adjoint extension $A_0$ corresponding to $\Gamma_0$ is the restriction of $T_{\max}$ defined on

$$\text{dom}\,A\_0 = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f^{[0]}(a) = 0 \right\},$$

and the minimal operator Tmin is the restriction of Tmax defined on

$$\text{dom}\,T\_{\text{min}} = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, f^{[0]}(a) = f^{[1]}(a) = 0 \right\}.$$

Moreover, if $\lambda \in \mathbb{C} \setminus \mathbb{R}$ and $\chi(\cdot,\lambda)$ is a nontrivial element in $\mathfrak{N}_\lambda(T_{\max})$, then one has $\chi^{[0]}(a,\lambda) \neq 0$. For all $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the corresponding $\gamma$-field and Weyl function are given by

$$\gamma(\cdot,\lambda) = u\_1(\cdot,\lambda) + M(\lambda)u\_2(\cdot,\lambda) \quad \text{and} \quad M(\lambda) = \frac{\chi^{[1]}(a,\lambda)}{\chi^{[0]}(a,\lambda)}.$$

Proof. First it will be verified that the mapping $(\Gamma_0, \Gamma_1) : \operatorname{dom} T_{\max} \to \mathbb{C}^2$ is surjective. Let $\alpha \in \mathbb{C}^2$; then there exists $f \in \operatorname{dom} T_{\max}$ such that $f^{[0]}(a) = \alpha_1$, $f^{[1]}(a) = \alpha_2$, and $f$ vanishes in a neighborhood of $b$. To see this, define the function $h$ on $(a,b)$ by

$$h(x) = \alpha\_1 u(x) + \alpha\_2 v(x).$$

Then $h, ph' \in AC(a,b)$ and $h$ satisfies $(L - \lambda_0)y = 0$, while $h \in L^2_r(a,a')$ by assumption. Now by cutting off the function $h$ near $b$, one obtains a function $f$ which satisfies $(L - \lambda_0)f = g$ for some $g \in L^2_r(a,b)$ and which vanishes in a neighborhood of $b$; see Proposition 6.1.3. Hence, $f \in \operatorname{dom} T_{\max}$ and at $a$ one has

$$
\Gamma\_0 f = f^{[0]}(a) = h^{[0]}(a) = W\_a(h, v) = \alpha\_1,
$$

and

$$
\Gamma\_1 f = f^{[1]}(a) = h^{[1]}(a) = -W\_a(h, u) = \alpha\_2.
$$

This proves the claim.
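The two Wronskian evaluations above can be checked directly from $h = \alpha_1 u + \alpha_2 v$, using only the bilinearity and antisymmetry of the Wronskian and $W(u,v) = 1$; this is a sketch, assuming the quasi-derivatives at $a$ are expressed through $W_a(\cdot,v)$ and $W_a(\cdot,u)$ as in the two displayed identities.

```latex
\begin{aligned}
W_a(h,v) &= \alpha_1 W_a(u,v) + \alpha_2 W_a(v,v) = \alpha_1 \cdot 1 + \alpha_2 \cdot 0 = \alpha_1,\\
-W_a(h,u) &= -\alpha_1 W_a(u,u) - \alpha_2 W_a(v,u) = 0 + \alpha_2 W_a(u,v) = \alpha_2.
\end{aligned}
```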

Now the abstract Green identity will be proved. The argument is the same as in the proof of Proposition 6.4.1. It will be shown first that $\lim_{x \to b} W_x(f,\overline{g}) = 0$ for all $f, g \in \operatorname{dom} T_{\max}$. In fact, since $b$ is in the limit-point case, the minimal operator $T_{\min}$ has defect numbers $(1,1)$ by Corollary 6.2.2. Now choose $h_1, h_2 \in \operatorname{dom} T_{\max}$ such that

$$h\_1^{[0]}(a) = 1, \quad h\_1^{[1]}(a) = 0, \quad h\_2^{[0]}(a) = 0, \quad h\_2^{[1]}(a) = 1,$$

and such that $h_1$ and $h_2$ vanish in a neighborhood of $b$; cf. Proposition 6.1.3. In the same way as in the proof of Proposition 6.4.1 it follows from (6.2.7) and Lemma 6.2.5 that $h_1, h_2 \notin \operatorname{dom} T_{\min}$, and since $\dim(\operatorname{dom} T_{\max} / \operatorname{dom} T_{\min}) = 2$, every function $f \in \operatorname{dom} T_{\max}$ can be written in the form

$$f = f\_0 + c\_1 h\_1 + c\_2 h\_2, \qquad f\_0 \in \operatorname{dom} T\_{\min},$$

for some $c_1, c_2 \in \mathbb{C}$. Therefore,

$$W\_x(f, \overline{g}) = W\_x(f\_0, \overline{g}) + W\_x(c\_1 h\_1 + c\_2 h\_2, \overline{g})$$

for all g ∈ dom Tmax and since the last term vanishes in a neighborhood of b, one obtains

$$\lim\_{x \to b} W\_x(f, \overline{g}) = \lim\_{x \to b} W\_x(f\_0, \overline{g}) = 0$$

for all g ∈ dom Tmax . Hence, it follows from (6.2.6) and Lemma 6.2.5 that for f,g ∈ dom Tmax one has

$$\begin{aligned} (T\_{\max}f,g)\_{L^2\_r(a,b)} - (f,T\_{\max}g)\_{L^2\_r(a,b)} &= -\lim\_{x \to a} W\_x(f,\overline{g}) \\ &= f^{[1]}(a)\overline{g^{[0]}(a)} - f^{[0]}(a)\overline{g^{[1]}(a)}, \end{aligned}$$

which implies that the abstract Green identity is satisfied with the choice of $\Gamma_0$ and $\Gamma_1$ in (6.4.23). Thus, (6.4.23) defines a boundary triplet for $(T_{\min})^* = T_{\max}$.

The forms of $\operatorname{dom} A_0$, $\operatorname{dom} T_{\min}$, the $\gamma$-field, and the Weyl function are verified in the same way as in the proof of Proposition 6.4.1. $\square$

## **6.5 The case of two limit-point endpoints and interface conditions**

Assume that the endpoints $a$ and $b$ of the interval $(a,b)$ are both singular and that the differential expression $L$ is in the limit-point case at $a$ and at $b$. In this section interface conditions at an interior point $c \in (a,b)$ are discussed and the maximal operator $T_{\max}$ associated with $L$ in $L^2_r(a,b)$ is identified as a natural extension of the coupling of the minimal operators on the subintervals $(a,c)$ and $(c,b)$. It turns out, in particular, that $T_{\max}$ is self-adjoint in $L^2_r(a,b)$ and hence $T_{\min} = T_{\max}$, so that the defect numbers of $T_{\min}$ are $(0,0)$; cf. Corollary 6.2.2.

Let c ∈ (a, b) and consider the intervals (a, c) and (c, b) separately. The differential expression L will be restricted to the open intervals (a, c) and (c, b), so that the endpoint c is regular for L and the endpoints a and b are in the limit-point case. At the point c fix a fundamental system (u1(·, λ); u2(·, λ)) for the equation (L − λ)y = 0 by the conditions

$$
\begin{pmatrix} u\_1(c,\lambda) & u\_2(c,\lambda) \\ (pu\_1')(c,\lambda) & (pu\_2')(c,\lambda) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} . \tag{6.5.1}
$$

In the following the functions f on (a, b) will often be written in the vector form

$$\begin{pmatrix} f\_+ \\ f\_- \end{pmatrix}, \quad \text{where} \quad f\_+ = f \restriction\_{(c,b)} \quad \text{and} \quad f\_- = f \restriction\_{(a,c)};$$

here the indices + and − stand for the restriction of a function on (a, b) to the subintervals (c, b) and (a, c), respectively.

Let $T^+_{\max}$ be the maximal operator generated by $L$ on $(c,b)$ and define

$$
\Gamma\_0^+ f\_+ = f\_+(c) \quad \text{and} \quad \Gamma\_1^+ f\_+ = (p\_+ f\_+^{\prime})(c), \quad f\_+ \in \text{dom} \, T\_{\text{max}}^+.
$$

According to Proposition 6.4.1, $\{\mathbb{C}, \Gamma^+_0, \Gamma^+_1\}$ is a boundary triplet for $T^+_{\max}$ with Weyl function $m_+$, so that on $(c,b)$

$$
\gamma\_+(\cdot,\lambda) = u\_1(\cdot,\lambda) + m\_+(\lambda)u\_2(\cdot,\lambda) \in L^2\_r(c,b),
$$

where $u_1(\cdot,\lambda)$ and $u_2(\cdot,\lambda)$ are as in (6.5.1). Note that the operator $A^+_0$ with domain

$$\operatorname{dom} A\_0^+ = \ker \Gamma\_0^+ = \left\{ f\_+ \in \operatorname{dom} T\_{\max}^+ \, : \, f\_+(c) = 0 \right\}$$

is a self-adjoint extension of the minimal operator $T^+_{\min}$ in $L^2_r(c,b)$ defined on

$$\text{dom}\,T^+\_{\text{min}} = \left\{ f\_+ \in \text{dom}\,T^+\_{\text{max}} \,:\, f\_+(c) = (pf'\_+)(c) = 0 \right\}.$$

Likewise, let $T^-_{\max}$ be the maximal operator generated by $L$ on $(a,c)$ and define

$$
\Gamma\_0^- f\_- = f\_-(c) \quad \text{and} \quad \Gamma\_1^- f\_- = -(p\_- f\_-')(c), \quad f\_- \in \text{dom} \, T\_{\text{max}}^- .
$$

Then $\{\mathbb{C}, \Gamma^-_0, \Gamma^-_1\}$ is a boundary triplet for $T^-_{\max}$ with Weyl function $m_-$, so that on $(a,c)$

$$
\gamma\_- (\cdot, \lambda) = u\_1 (\cdot, \lambda) - m\_- (\lambda) u\_2 (\cdot, \lambda) \in L^2\_r (a, c),
$$

where again $u_1(\cdot,\lambda)$ and $u_2(\cdot,\lambda)$ are as in (6.5.1). Note that the operator $A^-_0$ with domain

$$\operatorname{dom} A\_0^- = \ker \Gamma\_0^- = \left\{ f\_- \in \operatorname{dom} T\_{\max}^- \, : \, f\_-(c) = 0 \right\}$$

is a self-adjoint extension of the minimal operator $T^-_{\min}$ in $L^2_r(a,c)$ defined on

$$\text{dom}\,T\_{\text{min}}^{-}=\left\{f\_{-}\in\text{dom}\,T\_{\text{max}}^{-}:f\_{-}(c)=(pf\_{-}^{\prime})(c)=0\right\}.$$

The two maximal operators together give rise to the orthogonal coupling

$$T\_{\text{max}}^{-} \stackrel{\frown}{\oplus} T\_{\text{max}}^{+} \quad \text{in} \quad L\_r^2(a, b) = L\_r^2(a, c) \oplus L\_r^2(c, b).$$

It is clear from Proposition 1.3.13 that $T^-_{\max} \,\widehat{\oplus}\, T^+_{\max}$ is the adjoint of the orthogonal coupling of the corresponding minimal operators $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$, and that $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$ is a restriction of $T^+_{\max} \,\widehat{\oplus}\, T^-_{\max}$ defined by the conditions

$$f\_+(c) = 0 = f\_-(c) \quad \text{and} \quad (p\_+ f\_+')(c) = 0 = -(p\_- f\_-')(c)$$

on the functions $f \in \operatorname{dom}\,(T^+_{\max} \,\widehat{\oplus}\, T^-_{\max})$. In particular, these conditions force a smooth connection at $c$. Note that $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$ is a densely defined closed symmetric operator with defect numbers $(2,2)$ in $L^2_r(a,b) = L^2_r(a,c) \oplus L^2_r(c,b)$, which is simple since both operators $T^+_{\min}$ and $T^-_{\min}$ are simple by Proposition 6.4.4. The orthogonal coupling $T^+_{\max} \,\widehat{\oplus}\, T^-_{\max}$ can also be identified with an operator associated with $L$ when it is restricted to the domain

$$\{ f \in L\_r^2(a, b) : f, pf' \in AC((a, b) \backslash \{c\}), \ L f \in L\_r^2(a, b) \};$$

in other words, for the elements in this domain both $f$ and $pf'$ are allowed to have one-sided limits at $c$ which need not be equal. Similarly, the orthogonal coupling $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$ can be identified as the restriction of the self-adjoint operator $T_{\max}$ in $L^2_r(a,b)$ defined by the interface conditions

$$f(c) = (pf')(c) = 0.$$

The following result is a direct consequence of the orthogonal coupling of boundary triplets; see Section 4.6.

**Proposition 6.5.1.** A boundary triplet $\{\mathbb{C}^2, \widetilde{\Gamma}_0, \widetilde{\Gamma}_1\}$ for $T^+_{\max} \,\widehat{\oplus}\, T^-_{\max}$ is given by

$$
\tilde{\Gamma}\_0 f = \begin{pmatrix} f\_+(c) \\ f\_-(c) \end{pmatrix} \quad \text{and} \quad \tilde{\Gamma}\_1 f = \begin{pmatrix} (p\_+ f\_+')(c) \\ -(p\_- f\_-')(c) \end{pmatrix}, \tag{6.5.2}
$$

where $f = (f_+, f_-) \in \operatorname{dom}\,(T^+_{\max} \,\widehat{\oplus}\, T^-_{\max})$. The self-adjoint extension $\widetilde{A}_0$ corresponding to $\widetilde{\Gamma}_0$ is the orthogonal coupling of the operators $A^+_0$ and $A^-_0$ with domain

$$\text{dom}\,\tilde{A}\_0 = \left\{ f = \begin{pmatrix} f\_+ \\ f\_- \end{pmatrix} \in \text{dom}\left( T^+\_{\text{max}} \stackrel{\frown}{\oplus} T^-\_{\text{max}} \right) : f\_+(c) = 0 = f\_-(c) \right\}$$

and the orthogonal coupling of the minimal operators $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$ is the restriction of $T^+_{\max} \,\widehat{\oplus}\, T^-_{\max}$ defined on

$$\begin{aligned} & \text{dom}\left(T\_{\text{min}}^{+} \stackrel{\widehat{\oplus}}{\oplus} T\_{\text{min}}^{-}\right) \\ &= \left\{ f = \begin{pmatrix} f\_{+} \\ f\_{-} \end{pmatrix} \in \text{dom}\left(T\_{\text{max}}^{+} \stackrel{\widehat{\oplus}}{\oplus} T\_{\text{max}}^{-}\right) : \begin{aligned} f\_{+}(c) &= 0 = f\_{-}(c), \\ (pf\_{+}')(c) &= 0 = (pf\_{-}')(c) \end{aligned} \right\} . \end{aligned}$$

Moreover, for all $\lambda \in \rho(\widetilde{A}_0) = \rho(A^+_0) \cap \rho(A^-_0)$ the corresponding $\gamma$-field $\widetilde{\gamma}$ and Weyl function $\widetilde{M}$ are given by

$$
\widetilde{\gamma}(\lambda) = \begin{pmatrix} \gamma\_+(\lambda) & 0 \\ 0 & \gamma\_-(\lambda) \end{pmatrix} \quad \text{and} \quad \widetilde{M}(\lambda) = \begin{pmatrix} m\_+(\lambda) & 0 \\ 0 & m\_-(\lambda) \end{pmatrix}.
$$

Note that the self-adjoint operator $\widetilde{A}_0 = A^+_0 \oplus A^-_0$ is the orthogonal coupling of the self-adjoint realizations of $L$ on $(a,c)$ and $(c,b)$ corresponding to Dirichlet boundary conditions at $c$. The resolvents of $A^+_0$ and $A^-_0$ admit an integral representation as in Proposition 6.4.3, which extends in a natural form to the orthogonal coupling $\widetilde{A}_0$. Since the orthogonal coupling $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$ of the minimal operators is a densely defined closed simple symmetric operator with defect numbers $(2,2)$, the spectrum of $\widetilde{A}_0$ can be described with the help of the $2 \times 2$ matrix function $\widetilde{M}$ in Proposition 6.5.1 and the general results in Section 3.5 and Section 3.6; cf. the considerations below Proposition 6.4.4.

Recall for completeness that all self-adjoint extensions $\widetilde{A}_\Theta$ of the orthogonal coupling $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$ in $L^2_r(a,b) = L^2_r(a,c) \oplus L^2_r(c,b)$ are in one-to-one correspondence with the self-adjoint relations $\Theta$ in $\mathbb{C}^2$ via

$$\begin{split} \operatorname{dom}\widetilde{A}\_{\Theta} &= \left\{ f \in \operatorname{dom}\left( T^{+}\_{\max} \, \widehat{\oplus} \, T^{-}\_{\max} \right) : \left\{ \widetilde{\Gamma}\_{0} f, \widetilde{\Gamma}\_{1} f \right\} \in \Theta \right\} \\ &= \left\{ f \in \operatorname{dom}\left( T^{+}\_{\max} \, \widehat{\oplus} \, T^{-}\_{\max} \right) : \left\{ \begin{pmatrix} f\_{+}(c) \\ f\_{-}(c) \end{pmatrix}, \begin{pmatrix} (p\_{+} f'\_{+})(c) \\ -(p\_{-} f'\_{-})(c) \end{pmatrix} \right\} \in \Theta \right\}. \end{split}$$

For $\lambda \in \rho(\widetilde{A}_\Theta) \cap \rho(\widetilde{A}_0)$ Kreĭn's formula in the present setting has the form

$$\left(\widetilde{A}\_{\Theta} - \lambda\right)^{-1} = \left(\widetilde{A}\_0 - \lambda\right)^{-1} + \widetilde{\gamma}(\lambda)\left(\Theta - \widetilde{M}(\lambda)\right)^{-1}\widetilde{\gamma}(\overline{\lambda})^\*.\tag{6.5.3}$$

In the same way as in Section 6.3 and Section 6.4, the spectral properties of the self-adjoint realizations $\widetilde{A}_\Theta$ can be described in a convenient way with the help of transforms of the Weyl function $\widetilde{M}$; cf. Section 3.8.

Among all self-adjoint extensions of $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$ there is one of particular importance, namely the extension corresponding to the self-adjoint relation

$$\tilde{\Theta} = \text{span}\begin{pmatrix} 1 \\ 1 \end{pmatrix} \times \text{span}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \left\{ \left\{ \begin{pmatrix} \varphi \\ \varphi \end{pmatrix}, \begin{pmatrix} \psi \\ -\psi \end{pmatrix} \right\} : \varphi, \psi \in \mathbb{C} \right\},\tag{6.5.4}$$

which was also considered in the abstract context in Section 4.6; cf. Proposition 4.6.1.

**Corollary 6.5.2.** Let $\{\mathbb{C}^2, \widetilde{\Gamma}_0, \widetilde{\Gamma}_1\}$ be the boundary triplet for $T^+_{\max} \,\widehat{\oplus}\, T^-_{\max}$ defined in (6.5.2) and let the self-adjoint relation $\widetilde{\Theta}$ be as in (6.5.4). Then the corresponding self-adjoint extension $\widetilde{A}_{\widetilde{\Theta}}$ of $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$ satisfies

$$
\tilde{A}\_{\tilde{\Theta}} = T\_{\text{max}}\,,\tag{6.5.5}
$$

and for $\lambda \in \mathbb{C} \setminus \mathbb{R}$ Kreĭn's formula in (6.5.3) reads

$$\left(T\_{\max} - \lambda\right)^{-1} = \left(\widetilde{A}\_0 - \lambda\right)^{-1} - \frac{1}{m\_+(\lambda) + m\_-(\lambda)} \widetilde{\gamma}(\lambda) \begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix} \widetilde{\gamma}(\overline{\lambda})^\*.\tag{6.5.6}$$

Proof. By definition, the self-adjoint extension $\widetilde{A}_{\widetilde{\Theta}}$ of $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$ is determined by the boundary condition

$$\{\widetilde{\Gamma}\_0 f, \widetilde{\Gamma}\_1 f\} \in \widetilde{\Theta}, \quad f = \begin{pmatrix} f\_+ \\ f\_- \end{pmatrix} \in \text{dom}\left(T^+\_{\text{max}} \oplus T^-\_{\text{max}}\right),$$

which, by the definition of the boundary mappings in Proposition 6.5.1 and (6.5.4), leads to

$$\text{dom}\,\tilde{A}\_{\tilde{\Theta}} = \left\{ f = \begin{pmatrix} f\_{+} \\ f\_{-} \end{pmatrix} \in \text{dom}\left( T\_{\text{max}}^{+} \oplus T\_{\text{max}}^{-} \right) : \begin{matrix} f\_{+}(c) = f\_{-}(c) \\ (p\_{+} f\_{+}^{\prime})(c) = (p\_{-} f\_{-}^{\prime})(c) \end{matrix} \right\}.$$

Observe that this domain coincides with $\operatorname{dom} T_{\max}$ and (6.5.5) follows. Kreĭn's formula in (6.5.6) follows from (6.5.3) and

$$\left(\widetilde{\Theta} - \widetilde{M}(\lambda)\right)^{-1} = -\frac{1}{m\_+(\lambda) + m\_-(\lambda)} \begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix};$$

cf. Proposition 4.6.1. $\square$
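For completeness, the inverse appearing in the last display can be computed directly from the definition of $\widetilde\Theta$ in (6.5.4); the following short derivation is a sketch filling in this step.

```latex
% An element \{f,g\} belongs to \widetilde\Theta - \widetilde M(\lambda) if and only if
%   f = (\varphi,\varphi)^\top,
%   g = \bigl(\psi - m_+(\lambda)\varphi,\; -\psi - m_-(\lambda)\varphi\bigr)^\top
% for some \varphi,\psi \in \mathbb{C}. Adding the two components of g eliminates \psi:
g_1 + g_2 = -\bigl(m_+(\lambda) + m_-(\lambda)\bigr)\varphi,
\qquad\text{hence}\qquad
\bigl(\widetilde\Theta - \widetilde M(\lambda)\bigr)^{-1} g
  = \begin{pmatrix} \varphi \\ \varphi \end{pmatrix}
  = -\frac{1}{m_+(\lambda)+m_-(\lambda)}
    \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} g.
```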

Since Tmax is self-adjoint, it follows from Corollary 6.5.2 that the defect numbers of Tmin are (0, 0). This fact can be seen as a completion of the statements in Corollary 6.2.2.

The boundary triplet in (6.5.2) will now be transformed in order to interpret the self-adjoint extension Tmax in (6.5.5) in a convenient way. The following proposition is a variant of Proposition 4.6.4 in the present situation.

**Proposition 6.5.3.** A boundary triplet $\{\mathbb{C}^2, \widehat{\Gamma}_0, \widehat{\Gamma}_1\}$ for $T^+_{\max} \,\widehat{\oplus}\, T^-_{\max}$ is given by

$$
\widehat{\Gamma}\_0 f = \begin{pmatrix} -(p\_+ f\_+')(c) + (p\_- f\_-')(c) \\ f\_+(c) - f\_-(c) \end{pmatrix} \quad \text{and} \quad \widehat{\Gamma}\_1 f = \begin{pmatrix} f\_+(c) \\ (p\_- f\_-')(c) \end{pmatrix},
$$

where $f = (f_+, f_-) \in \operatorname{dom}\,(T^+_{\max} \,\widehat{\oplus}\, T^-_{\max})$. Here the self-adjoint operator defined on $\ker \widehat{\Gamma}_0$ coincides with the maximal operator $T_{\max}$ associated with $L$ in $L^2_r(a,b)$. For all $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the Weyl function corresponding to the boundary triplet $\{\mathbb{C}^2, \widehat{\Gamma}_0, \widehat{\Gamma}_1\}$ is given by

$$
\widehat{M}(\lambda) = \begin{pmatrix} -\frac{1}{m\_+(\lambda) + m\_-(\lambda)} & \frac{m\_-(\lambda)}{m\_+(\lambda) + m\_-(\lambda)} \\ \frac{m\_-(\lambda)}{m\_+(\lambda) + m\_-(\lambda)} & \frac{m\_-(\lambda)m\_+(\lambda)}{m\_+(\lambda) + m\_-(\lambda)} \end{pmatrix}. \tag{6.5.7}
$$

Assume that $\lambda \in \rho(\widetilde{A}_0) = \rho(A^+_0) \cap \rho(A^-_0)$. Then the functions $m_+$ and $m_-$ are defined and analytic at $\lambda$. It is not difficult to check that $\lambda \in \sigma_p(T_{\max})$ if and only if $m_+(\lambda) + m_-(\lambda) = 0$; this also follows from Theorem 2.6.2, the special form of $\widetilde{M}$ in Proposition 6.5.1, and the choice of $\widetilde{\Theta}$. Since the resolvents of $T_{\max}$ and the orthogonal coupling $\widetilde{A}_0$ differ by a rank-one operator, it is also clear that a point $\lambda \in \rho(A^+_0) \cap \rho(A^-_0)$ is either an isolated eigenvalue of $T_{\max}$ or belongs to $\rho(T_{\max})$. Hence, the expression for the Weyl function $\widehat{M}$ in (6.5.7) remains valid for all $\lambda \in \rho(A^+_0) \cap \rho(A^-_0) \cap \rho(T_{\max})$. Note also that for $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the Weyl function $\widehat{M}$ has the simple representation

$$
\widehat{M}(\lambda) = -\begin{pmatrix} m\_+(\lambda) & -1 \\ -1 & -\frac{1}{m\_-(\lambda)} \end{pmatrix}^{-1}.
$$

This representation remains valid for all $\lambda \in \rho(A^+_0) \cap \rho(A^-_1) \cap \rho(T_{\max})$, where $A^-_1$ denotes the self-adjoint operator in $L^2_r(a,c)$ defined on $\ker \Gamma^-_1$.
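That this representation agrees with (6.5.7) can be verified by a direct $2 \times 2$ inversion; the following check is a short sketch.

```latex
\det\begin{pmatrix} m_+(\lambda) & -1 \\ -1 & -\tfrac{1}{m_-(\lambda)} \end{pmatrix}
  = -\frac{m_+(\lambda)}{m_-(\lambda)} - 1
  = -\frac{m_+(\lambda) + m_-(\lambda)}{m_-(\lambda)},
\qquad\text{so}\qquad
-\begin{pmatrix} m_+(\lambda) & -1 \\ -1 & -\tfrac{1}{m_-(\lambda)} \end{pmatrix}^{-1}
  = \frac{m_-(\lambda)}{m_+(\lambda)+m_-(\lambda)}
    \begin{pmatrix} -\tfrac{1}{m_-(\lambda)} & 1 \\ 1 & m_+(\lambda) \end{pmatrix},
```

which coincides entrywise with the matrix in (6.5.7).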

As the orthogonal coupling $T^+_{\min} \,\widehat{\oplus}\, T^-_{\min}$ is a simple symmetric operator with defect numbers $(2,2)$, the spectral properties of the self-adjoint extension $T_{\max}$ can be described by means of the Weyl function $\widehat{M}$ in Proposition 6.5.3 and the general results in Section 3.5 and Section 3.6. First of all, it is clear that the poles of $\widehat{M}$ coincide with the isolated eigenvalues of $T_{\max}$, and hence it follows from the representation (6.5.7) that $\lambda \in \sigma_p(T_{\max})$ is an isolated eigenvalue if and only if $m_+$ and $m_-$ are holomorphic at $\lambda$ and $m_+(\lambda) + m_-(\lambda) = 0$, or both $m_+$ and $m_-$ have a pole at $\lambda$. Note that $\widehat{M}$ is holomorphic at $\lambda$ if $m_\mp$ has a pole and $m_\pm$ is holomorphic at $\lambda$. For the description of the eigenvalues of $T_{\max}$ embedded in the continuous spectrum, recall from Corollary 3.5.6 (see also Theorem 3.6.1) that $\lambda \in \sigma_p(T_{\max})$ if and only if $\widehat{\mathcal{R}}_\lambda \varphi = \lim_{\varepsilon \downarrow 0} i\varepsilon \widehat{M}(\lambda + i\varepsilon)\varphi \neq 0$ for some $\varphi \in \mathbb{C}^2$ and that the linear map

$$\widehat{\tau}: \ker \left( T\_{\max} - \lambda \right) \to \operatorname{ran} \widehat{\mathcal{R}}\_{\lambda}, \quad f(\cdot, \lambda) \mapsto \widehat{\Gamma}\_1 f(\cdot, \lambda) = \begin{pmatrix} f\_+(c, \lambda) \\ (p\_- f\_-')(c, \lambda) \end{pmatrix},$$

is bijective. The continuous, absolutely continuous, and singular continuous spectra are described as in Theorem 3.6.5 and Theorem 3.6.8.

**Proposition 6.5.4.** For $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the resolvent of the self-adjoint extension $T_{\max}$ is an integral operator of the form

$$\left( (T\_{\text{max}} - \lambda)^{-1} g \right)(t) = \int\_{a}^{b} G(t, s, \lambda) g(s) r(s) \, ds, \quad g \in L\_r^2(a, b), \tag{6.5.8}$$

where the Green function G(t, s, λ) is given by

$$G(t,s,\lambda) = \begin{cases} \dfrac{-1}{m\_{+}(\lambda)+m\_{-}(\lambda)}\bigl(u\_{1}(t,\lambda)+m\_{+}(\lambda)u\_{2}(t,\lambda)\bigr)\bigl(u\_{1}(s,\lambda)-m\_{-}(\lambda)u\_{2}(s,\lambda)\bigr), & a < s \leq t < b, \\ \dfrac{-1}{m\_{+}(\lambda)+m\_{-}(\lambda)}\bigl(u\_{1}(t,\lambda)-m\_{-}(\lambda)u\_{2}(t,\lambda)\bigr)\bigl(u\_{1}(s,\lambda)+m\_{+}(\lambda)u\_{2}(s,\lambda)\bigr), & a < t \leq s < b. \end{cases}$$
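As a consistency check (a sketch, using only the initial conditions (6.5.1) and the constancy of the Wronskian $W(u_1(\cdot,\lambda),u_2(\cdot,\lambda)) = 1$), the kernel is continuous at $t = s$, since both cases agree there, and its quasi-derivative has the unit jump required of a Green function:

```latex
\bigl(p\,\partial_t G\bigr)(s+0,s,\lambda) - \bigl(p\,\partial_t G\bigr)(s-0,s,\lambda)
= \frac{-1}{m_+ + m_-}
  \Bigl[(pu_1' + m_+ pu_2')(u_1 - m_- u_2)
      - (pu_1' - m_- pu_2')(u_1 + m_+ u_2)\Bigr](s,\lambda)
= -W(u_1,u_2) = -1,
```

where expanding the bracket leaves $(m_+ + m_-)\bigl(u_1\,pu_2' - (pu_1')u_2\bigr)(s,\lambda)$, all other terms cancelling.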

In particular, if $g \in L^2_r(a,b)$ has compact support, then

$$\begin{aligned} \left( (T\_{\max} - \lambda)^{-1} g \right)(t) &= \begin{pmatrix} u\_1(t, \lambda) & u\_2(t, \lambda) \end{pmatrix} \widehat{M}(\lambda) \begin{pmatrix} \int\_a^b u\_1(s, \lambda) g(s) r(s) \, ds \\ \int\_a^b u\_2(s, \lambda) g(s) r(s) \, ds \end{pmatrix} \\ &\quad - u\_1(t, \lambda) \int\_t^b u\_2(s, \lambda) g(s) r(s) \, ds - u\_2(t, \lambda) \int\_a^t u\_1(s, \lambda) g(s) r(s) \, ds, \end{aligned}$$

where the $2 \times 2$ matrix $\widehat{M}(\lambda)$ is given by (6.5.7).

Proof. Observe that for $g \in L^2_r(a,b)$ and $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the function $f(\cdot,\lambda)$ given by

$$\begin{aligned} &-\left(m\_{+}(\lambda)+m\_{-}(\lambda)\right)f(t,\lambda) \\ &=\left(u\_{1}(t,\lambda)+m\_{+}(\lambda)u\_{2}(t,\lambda)\right)\int\_{a}^{t}\left(u\_{1}(s,\lambda)-m\_{-}(\lambda)u\_{2}(s,\lambda)\right)g(s)r(s)\,ds \\ &+\left(u\_{1}(t,\lambda)-m\_{-}(\lambda)u\_{2}(t,\lambda)\right)\int\_{t}^{b}\left(u\_{1}(s,\lambda)+m\_{+}(\lambda)u\_{2}(s,\lambda)\right)g(s)r(s)\,ds \end{aligned}$$

is well defined. Moreover, it satisfies (L − λ)f = g and it has the following initial values at c:

$$f(c, \lambda) = -\frac{(g\_-, \gamma\_-(\overline{\lambda}))\_{L^2\_r(a,c)} + (g\_+, \gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)}}{m\_+(\lambda) + m\_-(\lambda)}$$

and

$$(pf')(c, \lambda) = -\frac{m\_+(\lambda)(g\_-, \gamma\_-(\overline{\lambda}))\_{L^2\_r(a,c)} - m\_-(\lambda)(g\_+, \gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)}}{m\_+(\lambda) + m\_-(\lambda)}.$$
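These initial values follow by evaluating the defining formula for $f$ at $t = c$; the following verification is a sketch, using the initial conditions (6.5.1) and the fact that, for real coefficients, $\overline{\gamma_\pm(s,\overline{\lambda})} = \gamma_\pm(s,\lambda)$.

```latex
% At t = c one has u_1(c,\lambda) = 1 and u_2(c,\lambda) = 0, so
-\bigl(m_+(\lambda)+m_-(\lambda)\bigr)f(c,\lambda)
= \int_a^c \gamma_-(s,\lambda)g(s)r(s)\,ds + \int_c^b \gamma_+(s,\lambda)g(s)r(s)\,ds
= (g_-,\gamma_-(\overline\lambda))_{L^2_r(a,c)} + (g_+,\gamma_+(\overline\lambda))_{L^2_r(c,b)},
```

and similarly $(pu_1')(c,\lambda) = 0$ and $(pu_2')(c,\lambda) = 1$ yield the expression for $(pf')(c,\lambda)$.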

Recall that the function $h = (T_{\max} - \lambda)^{-1}g$ is given by the Kreĭn formula in (6.5.6). This leads to the following expressions for its components:

$$
\begin{split}
\begin{pmatrix} h\_{+}(\cdot,\lambda) \\ h\_{-}(\cdot,\lambda) \end{pmatrix} &= \begin{pmatrix} (A\_{0}^{+}-\lambda)^{-1}g\_{+} \\ (A\_{0}^{-}-\lambda)^{-1}g\_{-} \end{pmatrix} \\ &- \frac{1}{m\_{+}(\lambda)+m\_{-}(\lambda)} \begin{pmatrix} \gamma\_{+}(\lambda) & 0 \\ 0 & \gamma\_{-}(\lambda) \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} (g\_{+},\gamma\_{+}(\overline{\lambda}))\_{L^{2}\_{r}(c,b)} \\ (g\_{-},\gamma\_{-}(\overline{\lambda}))\_{L^{2}\_{r}(a,c)} \end{pmatrix} \\ &= \begin{pmatrix} (A\_{0}^{+}-\lambda)^{-1}g\_{+} \\ (A\_{0}^{-}-\lambda)^{-1}g\_{-} \end{pmatrix} \\ &- \frac{1}{m\_{+}(\lambda)+m\_{-}(\lambda)} \begin{pmatrix} \gamma\_{+}(\lambda) \left[ (g\_{+},\gamma\_{+}(\overline{\lambda}))\_{L^{2}\_{r}(c,b)} + (g\_{-},\gamma\_{-}(\overline{\lambda}))\_{L^{2}\_{r}(a,c)} \right] \\ \gamma\_{-}(\lambda) \left[ (g\_{+},\gamma\_{+}(\overline{\lambda}))\_{L^{2}\_{r}(c,b)} + (g\_{-},\gamma\_{-}(\overline{\lambda}))\_{L^{2}\_{r}(a,c)} \right] \end{pmatrix}.
\end{split}
$$

To compute $h(c,\lambda)$ and $(ph')(c,\lambda)$, it suffices to compute $h\_+(c,\lambda)$ and $(p\_+h\_+')(c,\lambda)$, since $h \in \operatorname{dom} T\_{\max}$ is smooth at $c$. Let $k\_+(\cdot,\lambda) = (A\_0^+ - \lambda)^{-1}g\_+$; then clearly

$$k\_+(c, \lambda) = 0 \quad \text{and} \quad (p\_+ k\_+^{\prime})(c, \lambda) = (g\_+, \gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)},$$

using Proposition 2.3.2 (iv). Furthermore, observe that

$$
\gamma\_+(c,\lambda) = 1 \quad \text{and} \quad (p\_+\gamma\_+')(c,\lambda) = m\_+(\lambda).
$$

Hence, one sees that

$$h(c, \lambda) = h\_+(c, \lambda) = -\frac{(g\_+, \gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)} + (g\_-, \gamma\_-(\overline{\lambda}))\_{L^2\_r(a,c)}}{m\_+(\lambda) + m\_-(\lambda)}$$

and that

$$\begin{split} (ph')(c,\lambda) &= (ph'\_+)(c,\lambda) \\ &= (g\_+,\gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)} - \frac{m\_+(\lambda)\left[ (g\_+,\gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)} + (g\_-,\gamma\_-(\overline{\lambda}))\_{L^2\_r(a,c)} \right]}{m\_+(\lambda) + m\_-(\lambda)} \\ &= \frac{m\_-(\lambda)(g\_+,\gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)} - m\_+(\lambda)(g\_-,\gamma\_-(\overline{\lambda}))\_{L^2\_r(a,c)}}{m\_+(\lambda) + m\_-(\lambda)}. \end{split}$$

Therefore, $f(\cdot,\lambda)$ and $h(\cdot,\lambda)$ satisfy the same differential equation and the same initial conditions. It follows that $f(\cdot,\lambda) = (T\_{\max} - \lambda)^{-1}g$, and this yields (6.5.8).

Now assume that $g \in L^2\_r(a,b)$ has compact support. Then writing out all the products on the right-hand side of $(T\_{\max} - \lambda)^{-1}g$ is allowed, since each individual integral is well defined. This rewriting of the terms of the function $f(\cdot,\lambda)$ gives eight terms, which after adding and subtracting the terms

$$m\_{-}(\lambda)u\_{1}(t,\lambda)\int\_{t}^{b}u\_{2}(s,\lambda)g(s)r(s)\,ds$$

and

$$m\_{-}(\lambda)u\_{2}(t,\lambda)\int\_{a}^{t}u\_{1}(s,\lambda)g(s)r(s)\,ds$$

and regrouping leads to the desired result. $\square$

Let $f \in L^2\_r(a,b)$ have compact support and define the two-dimensional Fourier transform

$$
\widehat{f}(\mu) = \begin{pmatrix} \int\_a^b u\_1(t, \mu) f(t) r(t) \, dt \\ \int\_a^b u\_2(t, \mu) f(t) r(t) \, dt \end{pmatrix}. \tag{6.5.9}
$$

Consider the maximal operator $T\_{\max}$ as the smooth extension of $T\_{\min}^+ \oplus T\_{\min}^-$ and let $E(\lambda)$ be the corresponding spectral family. Then it follows from Proposition 6.5.4 in the same way as in the proof of Lemma 6.4.6 that

$$(E(\Delta)f,f)\_{L^2\_r(a,b)} = \int\_{\Delta} \widehat{f}(x)^\* d\Sigma(x)\widehat{f}(x),$$

where Σ denotes the 2 × 2-matrix function in the integral representation of the Weyl function M in Proposition 6.5.3; cf. Theorem A.4.2. In the present context there is an analog of Theorem 6.4.7.

**Theorem 6.5.5.** Let $\Sigma$ be the $2 \times 2$-matrix function in the integral representation of the Weyl function $M$ in (6.5.7). Then the map $f \mapsto \widehat{f}$ in (6.5.9) extends by continuity from compactly supported functions $f \in L^2\_r(a,b)$ to a unitary mapping $F : L^2\_r(a,b) \to L^2\_{d\Sigma}(\mathbb{R})$, such that the self-adjoint operator $T\_{\max}$ in $L^2\_r(a,b)$ is unitarily equivalent to multiplication by the independent variable in $L^2\_{d\Sigma}(\mathbb{R})$.

The Fourier transform in the theorem is vector-valued. The details of the proof follow the scalar case, but this line of thought will not be pursued in the text.

**Remark 6.5.6.** The coupling technique in this section can also be applied in situations where the endpoints $a$ or $b$ are regular or in the limit-circle case. For simplicity, a possible choice of the boundary triplets and Weyl functions will be made explicit when $L$ is regular at $a$ and $b$, and Dirichlet boundary conditions are imposed there. Let $c \in (a,b)$, consider the intervals $(a,c)$ and $(c,b)$ separately, and use the fundamental system $(u\_1(\cdot,\lambda); u\_2(\cdot,\lambda))$ in (6.5.1). As in Corollary 6.3.2, choose the operator $(T\_{\min}^+)'$ in $L^2\_{r\_+}(c,b)$ defined on

$$\text{dom}\,(T^+\_{\text{min}})' = \left\{ f \in \text{dom}\,T^+\_{\text{max}} : f\_+(c) = (p\_+f'\_+)(c) = f\_+(b) = 0 \right\}$$

and the boundary triplet $\{\mathbb{C}, \Gamma\_0^+, \Gamma\_1^+\}$ for the adjoint given by

$$
\Gamma\_0^+ f\_+ = f\_+(c) \quad \text{and} \quad \Gamma\_1^+ f\_+ = (p\_+ f\_+^{\prime})(c), \quad f\_+ \in \text{dom}\left( (T\_{\text{min}}^+)^{\prime} \right)^\*,
$$

with corresponding Weyl function

$$m\_{+}(\lambda) = -\frac{u\_1(b,\lambda)}{u\_2(b,\lambda)}.\tag{6.5.10}$$

Likewise, choose the operator $(T\_{\min}^-)'$ in $L^2\_{r\_-}(a,c)$ defined on

$$\text{dom}\,(T\_{\text{min}}^{-})' = \left\{ f \in \text{dom}\,T\_{\text{max}}^{-} : f\_{-}(c) = (p\_{-}f\_{-}^{\prime})(c) = f\_{-}(a) = 0 \right\}$$

and the boundary triplet $\{\mathbb{C}, \Gamma\_0^-, \Gamma\_1^-\}$ for the adjoint given by

$$
\Gamma\_0^- f\_- = f\_-(c) \quad \text{and} \quad \Gamma\_1^- f\_- = -(p\_- f\_-')(c), \quad f\_- \in \text{dom}\left( (T\_{\text{min}}^-)' \right)^\*,
$$

with corresponding Weyl function

$$m\_{-}(\lambda) = \frac{u\_1(a,\lambda)}{u\_2(a,\lambda)}.\tag{6.5.11}$$

The earlier considerations in this section remain valid with the Weyl functions $m\_+$ and $m\_-$ in (6.5.10) and (6.5.11), respectively.
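To illustrate (6.5.10), the following sketch verifies symbolically that the corresponding defect solution $u\_1 + m\_+u\_2$ satisfies the Dirichlet condition at $b$. The concrete expression $L = -y''$ on $(c,b) = (0,\pi)$ is an assumption made only for this example; the text does not fix a particular coefficient choice.

```python
# A minimal sanity check of (6.5.10) in the regular case, for the
# hypothetical example L = -y'' (p = r = 1, q = 0) on (c, b) = (0, pi).
# The fundamental system with u1(c) = 1, (pu1')(c) = 0, u2(c) = 0,
# (pu2')(c) = 1 is u1 = cos(sqrt(lam)*t), u2 = sin(sqrt(lam)*t)/sqrt(lam).
import sympy as sp

t, lam = sp.symbols('t lam', positive=True)
b = sp.pi
u1 = sp.cos(sp.sqrt(lam) * t)
u2 = sp.sin(sp.sqrt(lam) * t) / sp.sqrt(lam)

# Weyl function (6.5.10) for the Dirichlet condition at b
m_plus = -u1.subs(t, b) / u2.subs(t, b)

# The defect solution u1 + m_plus*u2 must satisfy the Dirichlet
# boundary condition at b, i.e. vanish there:
assert sp.simplify((u1 + m_plus * u2).subs(t, b)) == 0
```

The same computation with $a$ in place of $b$ (and the sign flipped) recovers (6.5.11).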

## **6.6 Exit space extensions**

In order to study boundary value problems where the spectral parameter appears in the boundary conditions one has to deal with self-adjoint extensions of a closed symmetric Sturm–Liouville operator in an exit space extending the original Hilbert space. In this section the exit space extensions will be investigated for a Sturm–Liouville expression $L$ which is regular at one endpoint and in the limit-point case at the other endpoint. Other situations may be studied in an analogous fashion.

At this stage observe that the construction in Section 6.5 may also be interpreted in the following way. The operator $T\_{\min}^+$ in $L^2\_r(c,b)$ has a self-adjoint extension $T\_{\max}$ in the Hilbert space $L^2\_r(a,b)$ when $L^2\_r(a,c)$ is considered as an exit space: $L^2\_r(a,b) = L^2\_r(a,c) \oplus L^2\_r(c,b)$. It follows from (6.5.6) in Corollary 6.5.2 that the compression of the resolvent of $T\_{\max}$ from $L^2\_r(a,b)$ onto $L^2\_r(c,b)$ is of the form

$$P^{+}(T\_{\max}-\lambda)^{-1}\imath\_{+} = (A\_0^{+} - \lambda)^{-1} - \gamma\_{+}(\lambda)\left(m\_{+}(\lambda) + m\_{-}(\lambda)\right)^{-1}\gamma\_{+}(\overline{\lambda})^{\*},$$

where $P^+$ denotes the orthogonal projection from $L^2\_r(a,b)$ onto $L^2\_r(c,b)$ and $\imath\_+$ is the corresponding canonical embedding. It follows from Theorem 2.7.3 and Theorem 2.7.4 (see also Corollary 4.6.2) that the compressed resolvent of the self-adjoint operator $T\_{\max}$ in $L^2\_r(a,b)$ gives rise to the Štraus extensions of $T\_{\min}^+$ in $L^2\_r(c,b)$ corresponding to the boundary conditions

$$
\Gamma\_1^+ f\_+ = -m\_-(\lambda)\Gamma\_0^+ f\_+ \quad \text{or} \quad (p\_+f\_+')(c) = -m\_-(\lambda)f\_+(c). \tag{6.6.1}
$$

Note that the family of Štraus extensions is defined via the Weyl function $m\_-(\lambda)$ of the Sturm–Liouville operator on the interval $(a,c)$; in particular, this family is described by the boundary conditions (6.6.1), in which the eigenvalue parameter appears.
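The boundary condition in (6.6.1) can be read off from the initial values of $f(\cdot,\lambda)$ computed at the beginning of this section: for $g$ supported in $(c,b)$ one has $g\_- = 0$. A short symbolic check (a sketch; the symbol `G` abbreviates the inner product $(g\_+,\gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)}$ and is introduced only for this check):

```python
# Sketch: with g supported in (c, b), i.e. g_- = 0, the initial values of
# f(., lambda) from the beginning of this section reduce to
#   f(c)     = -G / (m_+ + m_-),
#   (pf')(c) =  m_- * G / (m_+ + m_-),
# where G is shorthand for the inner product (g_+, gamma_+(conj(lambda))).
import sympy as sp

G, m_plus, m_minus = sp.symbols('G m_plus m_minus')
f_c = -G / (m_plus + m_minus)
pf_c = m_minus * G / (m_plus + m_minus)

# The Straus boundary condition (6.6.1): (pf')(c) = -m_-(lambda) f(c)
assert sp.simplify(pf_c + m_minus * f_c) == 0
```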

In the present treatment one stays close to the above context. More precisely, one assumes that the Sturm–Liouville operator is defined on an interval $(c,b)$, where the endpoint $c$ is regular and the limit-point condition prevails at $b$. The maximal operator in $\mathfrak{H} = L^2\_r(c,b)$ is denoted by $T\_{\max}^+$ and the boundary triplet for the minimal operator $T\_{\min}^+$ is denoted by $\{\mathbb{C}, \Gamma\_0^+, \Gamma\_1^+\}$, where $\Gamma\_0^+ f\_+ = f\_+(c)$ and $\Gamma\_1^+ f\_+ = (p\_+f\_+')(c)$; cf. Proposition 6.4.1 and Section 6.4. The corresponding $\gamma$-field and Weyl function are denoted by $\gamma\_+$ and $m\_+$, respectively. Now let $\tau$ be a scalar Nevanlinna function (which is not equal to a real constant). The interest is in boundary value problems of the form

$$
\Gamma\_1^+ f\_+ = -\tau(\lambda)\Gamma\_0^+ f\_+ \quad \text{or, equivalently,} \quad (p\_+ f\_+')(c) = -\tau(\lambda)f\_+(c); \tag{6.6.2}
$$

cf. (6.6.1). According to Theorem 4.2.4 (or Theorem 4.3.1), there exist a reproducing kernel Hilbert space $\mathfrak{H}' = \mathfrak{H}(N\_\tau)$ (or an $L^2$-space $\mathfrak{H}'$, respectively), a closed simple symmetric operator $T'$ in $\mathfrak{H}'$, and a boundary triplet $\{\mathbb{C}, \Gamma\_0', \Gamma\_1'\}$ for the adjoint $(T')^\*$ with $\gamma$-field $\gamma'$ and Weyl function $\tau$. Observe that the closed symmetric operator $T'$ is not necessarily densely defined and that $(T')^\*$ may not be an operator. Hence, to denote boundary triplets for the orthogonal sum of the maximal operators $T\_{\max}^+ \oplus (T')^\*$ the graph notation will be used. In particular, instead of $\Gamma\_0^+ f\_+ = f\_+(c)$ and $\Gamma\_1^+ f\_+ = (p\_+f\_+')(c)$ the notation $\Gamma\_0^+ \widehat{f}\_+ = f\_+(c)$ and $\Gamma\_1^+ \widehat{f}\_+ = (p\_+f\_+')(c)$ for $\widehat{f}\_+ = \{f\_+, f\_+'\} \in T\_{\max}^+$ is used. Thus,

$$
\widetilde{\Gamma}\_0 \begin{pmatrix} \widehat{f}\_+ \\ \widehat{k} \end{pmatrix} = \begin{pmatrix} \Gamma\_0^+ \widehat{f}\_+ \\ \Gamma\_0' \widehat{k} \end{pmatrix} \quad \text{and} \quad \widetilde{\Gamma}\_1 \begin{pmatrix} \widehat{f}\_+ \\ \widehat{k} \end{pmatrix} = \begin{pmatrix} \Gamma\_1^+ \widehat{f}\_+ \\ \Gamma\_1' \widehat{k} \end{pmatrix},
$$

where $\widehat{f}\_+ = \{f\_+, f\_+'\} \in T\_{\max}^+$ and $\widehat{k} = \{k, k'\} \in (T')^\*$, defines a boundary triplet $\{\mathbb{C}^2, \widetilde{\Gamma}\_0, \widetilde{\Gamma}\_1\}$ for $(T\_{\min}^+ \oplus T')^\* = T\_{\max}^+ \oplus (T')^\*$ and

$$
\widetilde{A}\_0 := A\_0^+ \oplus A\_0' = \ker \widetilde{\Gamma}\_0,
$$

where $A\_0^+ = \ker \Gamma\_0^+$ and $A\_0' = \ker \Gamma\_0'$, is a self-adjoint extension of $T\_{\min}^+ \oplus T'$ in $\mathfrak{H} \oplus \mathfrak{H}'$. It is clear that for $\lambda \in \rho(\widetilde{A}\_0) = \rho(A\_0^+) \cap \rho(A\_0')$ the $\gamma$-field $\widetilde{\gamma}$ and Weyl function $\widetilde{M}$ corresponding to the boundary triplet $\{\mathbb{C}^2, \widetilde{\Gamma}\_0, \widetilde{\Gamma}\_1\}$ have the form

$$
\widetilde{\gamma}(\lambda) = \begin{pmatrix} \gamma\_+(\lambda) & 0 \\ 0 & \gamma'(\lambda) \end{pmatrix} \quad \text{and} \quad \widetilde{M}(\lambda) = \begin{pmatrix} m\_+(\lambda) & 0 \\ 0 & \tau(\lambda) \end{pmatrix};
$$

cf. Proposition 6.5.1. The self-adjoint extension $\widetilde{A}$ of $T\_{\min}^+ \oplus T'$ in the next proposition is of special interest, since its compressed resolvent corresponds to the Nevanlinna function $\tau$ via the Kreĭn–Naĭmark formula. Proposition 6.6.1 is a special case of Theorem 2.7.4 and Corollary 4.6.2.

**Proposition 6.6.1.** Let $T\_{\min}^+$ and $T'$ be the closed simple symmetric operators with boundary triplets $\{\mathbb{C}, \Gamma\_0^+, \Gamma\_1^+\}$ and $\{\mathbb{C}, \Gamma\_0', \Gamma\_1'\}$ as above. Then

$$\tilde{A} = \left\{ \begin{pmatrix} \widehat{f}\_{+} \\ \widehat{k} \end{pmatrix} : \widehat{f}\_{+} \in T\_{\text{max}}^{+}, \widehat{k} \in (T')^{\*}, \Gamma\_{0}^{+} \widehat{f}\_{+} = \Gamma\_{0}^{\prime} \widehat{k}, \Gamma\_{1}^{+} \widehat{f}\_{+} = -\Gamma\_{1}^{\prime} \widehat{k} \right\} \tag{6.6.3}$$

is a self-adjoint relation in $\mathfrak{H} \oplus \mathfrak{H}'$ and for all $\lambda \in \mathbb{C} \backslash \mathbb{R}$ the resolvent of $\widetilde{A}$ has the form

$$\left(\widetilde{A} - \lambda\right)^{-1} = \left(\widetilde{A}\_0 - \lambda\right)^{-1} - \frac{1}{m\_+(\lambda) + \tau(\lambda)} \,\widetilde{\gamma}(\lambda) \begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix} \widetilde{\gamma}(\overline{\lambda})^\*.$$

The self-adjoint relation $\widetilde{A}$ satisfies the minimality condition

$$\mathfrak{H} \oplus \mathfrak{H}' = \overline{\operatorname{span}} \left\{ \mathfrak{H}, \operatorname{ran} \left( \widetilde{A} - \lambda \right)^{-1} \iota\_{\mathfrak{H}} : \lambda \in \mathbb{C} \backslash \mathbb{R} \right\},$$

and for $\lambda \in \mathbb{C} \backslash \mathbb{R}$ the compression of the resolvent $(\widetilde{A} - \lambda)^{-1}$ to $\mathfrak{H}$ is given by

$$P\_{\mathfrak{H}}(\tilde{A} - \lambda)^{-1} \iota\_{\mathfrak{H}} = (A\_0^+ - \lambda)^{-1} - \gamma\_+(\lambda) \left( m\_+(\lambda) + \tau(\lambda) \right)^{-1} \gamma\_+(\overline{\lambda})^\*,\tag{6.6.4}$$

where $P\_{\mathfrak{H}} : \mathfrak{H} \oplus \mathfrak{H}' \to \mathfrak{H}$ is the orthogonal projection from $\mathfrak{H} \oplus \mathfrak{H}'$ onto $\mathfrak{H}$ and $\iota\_{\mathfrak{H}} : \mathfrak{H} \to \mathfrak{H} \oplus \mathfrak{H}'$ is the canonical embedding of $\mathfrak{H}$ into $\mathfrak{H} \oplus \mathfrak{H}'$.

The next result describes a particular boundary triplet $\{\mathbb{C}^2, \widehat{\Gamma}\_0, \widehat{\Gamma}\_1\}$ for which the self-adjoint relation $\widetilde{A}$ in (6.6.3) coincides with the kernel of the boundary mapping $\widehat{\Gamma}\_0$; cf. Proposition 4.6.4.

**Proposition 6.6.2.** Let $T\_{\min}^+$ and $T'$ be closed symmetric operators in the Hilbert spaces $\mathfrak{H}$ and $\mathfrak{H}'$ with boundary triplets $\{\mathbb{C}, \Gamma\_0^+, \Gamma\_1^+\}$ and $\{\mathbb{C}, \Gamma\_0', \Gamma\_1'\}$ and corresponding Weyl functions $m\_+$ and $\tau$, respectively, as in the beginning of this section. Then $\{\mathbb{C}^2, \widehat{\Gamma}\_0, \widehat{\Gamma}\_1\}$, where

$$
\widehat{\Gamma}\_0 \begin{pmatrix} \widehat{f}\_+ \\ \widehat{k} \end{pmatrix} = \begin{pmatrix} -\Gamma\_1^+ \widehat{f}\_+ - \Gamma\_1' \widehat{k} \\ \Gamma\_0^+ \widehat{f}\_+ - \Gamma\_0' \widehat{k} \end{pmatrix} \quad \text{and} \quad \widehat{\Gamma}\_1 \begin{pmatrix} \widehat{f}\_+ \\ \widehat{k} \end{pmatrix} = \begin{pmatrix} \Gamma\_0^+ \widehat{f}\_+ \\ -\Gamma\_1' \widehat{k} \end{pmatrix},
$$

with $\widehat{f}\_+ \in T\_{\max}^+$, $\widehat{k} \in (T')^\*$, is a boundary triplet for $T\_{\max}^+ \oplus (T')^\*$ such that the self-adjoint relation $\widetilde{A}$ in (6.6.3) corresponds to the boundary mapping $\widehat{\Gamma}\_0$, that is,

$$
\tilde{A} = \ker \widehat{\Gamma}\_0.
$$

The Weyl function corresponding to $\{\mathbb{C}^2, \widehat{\Gamma}\_0, \widehat{\Gamma}\_1\}$ is given by

$$
\widehat{M}(\lambda) = \begin{pmatrix} \dfrac{-1}{m\_+(\lambda) + \tau(\lambda)} & \dfrac{\tau(\lambda)}{m\_+(\lambda) + \tau(\lambda)} \\[2ex] \dfrac{\tau(\lambda)}{m\_+(\lambda) + \tau(\lambda)} & \dfrac{m\_+(\lambda)\tau(\lambda)}{m\_+(\lambda) + \tau(\lambda)} \end{pmatrix}, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{6.6.5}
$$
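The entries of (6.6.5) can be recovered by a routine computation: evaluate $\widehat{\Gamma}\_0$ and $\widehat{\Gamma}\_1$ on a defect element, using $\Gamma\_0^+ = c\_+$, $\Gamma\_1^+ = m\_+c\_+$ on the first component and $\Gamma\_0' = c'$, $\Gamma\_1' = \tau c'$ on the second, and solve $\widehat{\Gamma}\_1 = \widehat{M}\widehat{\Gamma}\_0$. A symbolic sketch (the coefficients `c_plus`, `c_prime` and the prescribed values `a`, `b` are auxiliary symbols, not notation from the text):

```python
# Sketch of the computation behind (6.6.5): on a defect element the
# abstract boundary values are Gamma0+ = c_plus, Gamma1+ = m*c_plus on
# the first component and Gamma0' = c_prime, Gamma1' = tau*c_prime on
# the second; the Weyl function maps hatGamma0 to hatGamma1.
import sympy as sp

m, tau, c_plus, c_prime = sp.symbols('m tau c_plus c_prime')

hat_gamma0 = sp.Matrix([-m * c_plus - tau * c_prime, c_plus - c_prime])
hat_gamma1 = sp.Matrix([c_plus, -tau * c_prime])

a, b = sp.symbols('a b')  # prescribed values of hatGamma0
sol = sp.solve([sp.Eq(hat_gamma0[0], a), sp.Eq(hat_gamma0[1], b)],
               [c_plus, c_prime], dict=True)[0]

M_hat = sp.Matrix([[-1, tau], [tau, m * tau]]) / (m + tau)  # candidate (6.6.5)
residual = hat_gamma1.subs(sol) - M_hat * sp.Matrix([a, b])
assert sp.simplify(residual) == sp.zeros(2, 1)
```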

Next it is shown that the Weyl function $\widehat{M}$ shows up in the integral representation of the compressed resolvent $R(\lambda) = P\_{\mathfrak{H}}(\widetilde{A} - \lambda)^{-1}\iota\_{\mathfrak{H}}$ of $\widetilde{A}$. The next result and its proof are similar to Proposition 6.5.4 and its proof.

**Proposition 6.6.3.** For $\lambda \in \mathbb{C} \backslash \mathbb{R}$ the compressed resolvent of the self-adjoint extension $\widetilde{A}$ in (6.6.3) is an integral operator of the form

$$(R(\lambda)g\_+)(t) = \int\_c^b G(t, s, \lambda)g\_+(s)r(s) \, ds, \quad g\_+ \in L^2\_r(c, b), \tag{6.6.6}$$

where the Green function G(t, s, λ) is given by

$$G(t,s,\lambda) = \begin{cases} \dfrac{-1}{m\_{+}(\lambda)+\tau(\lambda)}\left(u\_{1}(t,\lambda)+m\_{+}(\lambda)u\_{2}(t,\lambda)\right)\left(u\_{1}(s,\lambda)-\tau(\lambda)u\_{2}(s,\lambda)\right), & c < s \le t < b, \\[1ex] \dfrac{-1}{m\_{+}(\lambda)+\tau(\lambda)}\left(u\_{1}(t,\lambda)-\tau(\lambda)u\_{2}(t,\lambda)\right)\left(u\_{1}(s,\lambda)+m\_{+}(\lambda)u\_{2}(s,\lambda)\right), & c < t \le s < b. \end{cases}$$

In particular, if $g\_+ \in L^2\_r(c,b)$ has compact support, then

$$\begin{aligned} (R(\lambda)g\_+)(t) = {}& \begin{pmatrix} u\_1(t,\lambda) & u\_2(t,\lambda) \end{pmatrix} \widehat{M}(\lambda) \begin{pmatrix} \int\_c^b u\_1(s,\lambda)g\_+(s)r(s) \, ds \\ \int\_c^b u\_2(s,\lambda)g\_+(s)r(s) \, ds \end{pmatrix} \\ &- u\_1(t,\lambda) \int\_t^b u\_2(s,\lambda)g\_+(s)r(s) \, ds - u\_2(t,\lambda) \int\_c^t u\_1(s,\lambda)g\_+(s)r(s) \, ds, \end{aligned}$$

where the $2 \times 2$ matrix $\widehat{M}(\lambda)$ is given by (6.6.5).

Proof. Observe that for $g\_+ \in L^2\_r(c,b)$ and $\lambda \in \mathbb{C} \backslash \mathbb{R}$ the function $f\_+(\cdot,\lambda)$ given by

$$\begin{aligned} &-\left(m\_{+}(\lambda)+\tau(\lambda)\right)f\_{+}(t,\lambda) \\ &=\left(u\_{1}(t,\lambda)+m\_{+}(\lambda)u\_{2}(t,\lambda)\right)\int\_{c}^{t}\left(u\_{1}(s,\lambda)-\tau(\lambda)u\_{2}(s,\lambda)\right)g\_{+}(s)r(s)\,ds \\ &+\left(u\_{1}(t,\lambda)-\tau(\lambda)u\_{2}(t,\lambda)\right)\int\_{t}^{b}\left(u\_{1}(s,\lambda)+m\_{+}(\lambda)u\_{2}(s,\lambda)\right)g\_{+}(s)r(s)\,ds \end{aligned}$$

is well defined. Moreover, it satisfies $(L - \lambda)f\_+ = g\_+$ and it has the following initial values at $c$:

$$f\_{+}(c,\lambda) = -\frac{(g\_{+},\gamma\_{+}(\overline{\lambda}))\_{L^{2}\_{r}(c,b)}}{m\_{+}(\lambda)+\tau(\lambda)} \quad \text{and} \quad (p\_+f'\_{+})(c,\lambda) = \frac{\tau(\lambda)(g\_{+},\gamma\_{+}(\overline{\lambda}))\_{L^{2}\_{r}(c,b)}}{m\_{+}(\lambda)+\tau(\lambda)}.$$

Now consider the function $h\_+ = R(\lambda)g\_+ = P\_{\mathfrak{H}}(\widetilde{A} - \lambda)^{-1}\iota\_{\mathfrak{H}}\,g\_+$, which is given by the Kreĭn formula in (6.6.4). This leads to the equality

$$h\_{+}(\cdot,\lambda) = (A\_0^+ - \lambda)^{-1}g\_+ - \frac{1}{m\_+(\lambda) + \tau(\lambda)}\gamma\_+(\lambda)(g\_+, \gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)}.$$

Since $(L - \lambda)(A\_0^+ - \lambda)^{-1}g\_+ = g\_+$ and $(L - \lambda)\gamma\_+(\lambda) = 0$, one has $(L - \lambda)h\_+ = g\_+$. From $\gamma\_+(c,\lambda) = 1$ and $((A\_0^+ - \lambda)^{-1}g\_+)(c) = 0$ one concludes that

$$h\_+(c, \lambda) = -\frac{(g\_+, \gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)}}{m\_+(\lambda) + \tau(\lambda)}.$$

Furthermore, since $(p\_+\gamma\_+')(c,\lambda) = m\_+(\lambda)$ and

$$\left(p\_+\left((A\_0^+ - \lambda)^{-1}g\_+\right)'\right)(c) = \Gamma\_1^+(A\_0^+ - \lambda)^{-1}g\_+ = \gamma\_+(\overline{\lambda})^\*g\_+ = (g\_+, \gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)},$$

it also follows that

$$\begin{split} (p\_+h\_+')(c,\lambda) &= (g\_+, \gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)} - \frac{m\_+(\lambda)(g\_+, \gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)}}{m\_+(\lambda) + \tau(\lambda)} \\ &= \frac{\tau(\lambda)(g\_+, \gamma\_+(\overline{\lambda}))\_{L^2\_r(c,b)}}{m\_+(\lambda) + \tau(\lambda)}. \end{split}$$

Therefore, $f\_+(\cdot,\lambda)$ and $h\_+(\cdot,\lambda) = (R(\lambda)g\_+)(\cdot)$ satisfy the same differential equation and the same initial conditions. Consequently, $f\_+ = R(\lambda)g\_+$, which yields (6.6.6).

Now assume that $g\_+ \in L^2\_r(c,b)$ has compact support. Then writing out all the products in the Green function is allowed, as each individual integral is well defined. This rewriting of the terms of the function $f\_+(\cdot,\lambda)$ gives eight terms, which after adding and subtracting the terms

$$
\tau(\lambda)u\_1(t,\lambda) \int\_t^b u\_2(s,\lambda)g\_+(s)r(s) \, ds
$$

and

$$\tau(\lambda)u\_2(t,\lambda)\int\_c^t u\_1(s,\lambda)g\_+(s)r(s)\,ds$$

and regrouping leads to the desired result. $\square$
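The eight-term regrouping can be verified symbolically. Writing $A\_i = \int\_c^t u\_i g\_+ r\,ds$ and $B\_i = \int\_t^b u\_i g\_+ r\,ds$ (auxiliary symbols introduced only for this sketch, so that $\int\_c^b = A\_i + B\_i$), the variation-of-constants formula for $f\_+$ and the $\widehat{M}$-form in Proposition 6.6.3 coincide:

```python
# Symbolic check of the eight-term regrouping in the proof above.
# A1, A2 stand for int_c^t u_i g_+ r ds and B1, B2 for int_t^b u_i g_+ r ds;
# u1, u2 stand for the values u_i(t, lambda).  All are plain symbols here.
import sympy as sp

u1, u2, A1, A2, B1, B2, m, tau = sp.symbols('u1 u2 A1 A2 B1 B2 m tau')

# f_+ from the variation-of-constants formula at the start of the proof
f = -((u1 + m * u2) * (A1 - tau * A2)
      + (u1 - tau * u2) * (B1 + m * B2)) / (m + tau)

# The regrouped form with the Weyl matrix (6.6.5); note int_c^b = A_i + B_i
M_hat = sp.Matrix([[-1, tau], [tau, m * tau]]) / (m + tau)
regrouped = (sp.Matrix([[u1, u2]]) * M_hat * sp.Matrix([A1 + B1, A2 + B2]))[0] \
    - u1 * B2 - u2 * A1

assert sp.simplify(f - regrouped) == 0
```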

Returning to the boundary value problem (6.6.2) and taking into account Theorem 2.7.3, it is clear that for $\lambda \in \mathbb{C} \backslash \mathbb{R}$ and $g\_+ \in L^2\_r(c,b)$ the unique solution of

$$(L - \lambda)f\_{+} = g\_{+}, \qquad (p\_{+}f\_{+}^{\prime})(c) = -\tau(\lambda)f\_{+}(c), \tag{6.6.7}$$

is given by $R(\lambda)g\_+$ in Proposition 6.6.3. The last condition in (6.6.7) is an example of $\lambda$-dependent boundary conditions. If $\tau$ is the Weyl function corresponding to a Sturm–Liouville operator on $(a,c)$ such that the endpoint $a$ is in the limit-point case and $c$ is regular, then the exit space extension $\widetilde{A}$ coincides with the maximal Sturm–Liouville operator on $(a,b)$; this is the situation discussed in Section 6.5. For the special case where $\tau$ is a linear or rational Nevanlinna function the model space, and hence the corresponding exit space extension $\widetilde{A}$, can be constructed explicitly; cf. Example 4.3.3.

## **6.7 Weyl functions and subordinate solutions**

Consider the Sturm–Liouville equation on the interval $(a,b)$ and assume that the endpoint $a$ is regular and that the endpoint $b$ is in the limit-point case (when replacing derivatives by quasi-derivatives the following discussion extends in a natural fashion to the situation where $a$ is in the limit-circle case). Let $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ be the boundary triplet for $T\_{\max}$ in Proposition 6.4.1. The spectrum of the self-adjoint extension $A\_0$ with $\operatorname{dom} A\_0 = \ker \Gamma\_0$ will be studied by means of subordinate solutions of the equation $(L - \lambda)y = 0$.

Note that for each $x > a$ one can define the Hilbert space $L^2\_r(a,x)$ with the inner product

$$(f,g)\_{L^2\_r(a,x)} = \int\_a^x f(t)\overline{g(t)}r(t) \, dt, \quad f,g \in L^2\_r(a,x).$$

For fixed $f, g \in L^2\_r(a,c)$, $a < c < b$, the function $x \mapsto (f,g)\_{L^2\_r(a,x)}$ is absolutely continuous and

$$\frac{d}{dx}(f,g)\_{L^2\_r(a,x)} = f(x)\overline{g(x)}r(x)$$

almost everywhere on $(a,c)$. The norm corresponding to $(\cdot,\cdot)\_{L^2\_r(a,x)}$ will be denoted by $\|\cdot\|\_{L^2\_r(a,x)}$; it will play an important role in the estimates in this section.

**Definition 6.7.1.** Let $\xi \in \mathbb{R}$. A solution $v(\cdot,\xi)$ of $(L - \xi)y = 0$ is said to be subordinate at $b$ if

$$\lim\_{x \to b} \frac{\|v(\cdot, \xi)\|\_{L^2\_r(a, x)}}{\|u(\cdot, \xi)\|\_{L^2\_r(a, x)}} = 0$$

for every solution u(·, ξ) of (L − ξ)y = 0 which is not a scalar multiple of v(·, ξ).

The spectrum of $A\_0$ will be studied in terms of solutions of the differential equation $(L - \xi)y = 0$ which do not necessarily belong to $L^2\_r(a,b)$. In fact, the interest will be in subordinate solutions which satisfy or do not satisfy the boundary condition $f(a) = 0$ which characterizes $\operatorname{dom} A\_0$. Observe that if a solution $v(\cdot,\xi)$ of $(L - \xi)y = 0$ belongs to $L^2\_r(a,b)$, then it is subordinate at $b$, since $b$ is in the limit-point case and hence any other solution which is not a scalar multiple of $v(\cdot,\xi)$ does not belong to $L^2\_r(a,b)$.

Before the main result can be stated some preliminary considerations are necessary. Recall first the transformation of the boundary triplet in (6.4.7), where $\tau \in \mathbb{R} \cup \{\infty\}$. This results in a boundary triplet $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$, where

$$\begin{aligned} \Gamma\_0^\tau f &= \frac{\tau}{\sqrt{\tau^2 + 1}} f(a) - \frac{1}{\sqrt{\tau^2 + 1}} (pf')(a), \\ \Gamma\_1^\tau f &= \frac{1}{\sqrt{\tau^2 + 1}} f(a) + \frac{\tau}{\sqrt{\tau^2 + 1}} (pf')(a), \end{aligned}$$

for f ∈ dom Tmax , with corresponding γ-field and Weyl function given by

$$\gamma\_{\tau}(\lambda) = \frac{\gamma(\lambda)}{\tau - M(\lambda)} \sqrt{\tau^2 + 1} \quad \text{and} \quad M\_{\tau}(\lambda) = \frac{1 + \tau M(\lambda)}{\tau - M(\lambda)}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{6.7.1}$$
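Two structural facts behind (6.7.1) are easy to verify symbolically: the matrix transforming $(\Gamma\_0, \Gamma\_1)$ into $(\Gamma\_0^\tau, \Gamma\_1^\tau)$ preserves the symplectic form (consistent with $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$ being again a boundary triplet), and $M\_\tau$ is again a Nevanlinna function, since $\operatorname{Im} M\_\tau(\lambda) = (1+\tau^2)\operatorname{Im} M(\lambda)/|\tau - M(\lambda)|^2$. A sketch, for real $\tau$ (the case $\tau = \infty$ is excluded here):

```python
# Two checks behind (6.7.1), for real tau (the case tau = infinity is
# excluded in this sketch).
import sympy as sp

tau = sp.symbols('tau', real=True)

# (i) The matrix mapping (Gamma_0, Gamma_1) to (Gamma_0^tau, Gamma_1^tau)
#     preserves the form induced by J.
V = sp.Matrix([[tau, -1], [1, tau]]) / sp.sqrt(tau**2 + 1)
J = sp.Matrix([[0, -1], [1, 0]])
assert sp.simplify(V.T * J * V - J) == sp.zeros(2, 2)

# (ii) M_tau inherits the Nevanlinna property from M: with M = x + i*y,
#      Im M_tau = (1 + tau**2) * y / |tau - M|**2.
x, y = sp.symbols('x y', real=True)
M = x + sp.I * y
M_tau = (1 + tau * M) / (tau - M)
target = (1 + tau**2) * y / ((tau - x)**2 + y**2)
assert sp.simplify(sp.im(sp.expand_complex(M_tau)) - target) == 0
```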

It follows from Proposition 2.3.6 (iii) that

$$\frac{M\_{\tau}(\lambda) - M\_{\tau}(\mu)^{\*}}{\lambda - \overline{\mu}} = \gamma\_{\tau}(\mu)^{\*}\gamma\_{\tau}(\lambda), \quad \lambda, \mu \in \mathbb{C} \backslash \mathbb{R}. \tag{6.7.2}$$

The fundamental system $(v\_1(\cdot,\lambda); v\_2(\cdot,\lambda))$, $\lambda \in \mathbb{C}$, in Lemma 6.4.5, given by

$$
\begin{pmatrix} v\_1(\cdot,\lambda) \\ v\_2(\cdot,\lambda) \end{pmatrix} = \frac{1}{\sqrt{\tau^2 + 1}} \begin{pmatrix} \tau & -1 \\ 1 & \tau \end{pmatrix} \begin{pmatrix} u\_1(\cdot,\lambda) \\ u\_2(\cdot,\lambda) \end{pmatrix},
$$

satisfies the initial conditions

$$
\begin{pmatrix}
v\_1(a,\lambda) & v\_2(a,\lambda) \\ (pv\_1')(a,\lambda) & (pv\_2')(a,\lambda)
\end{pmatrix} = \frac{1}{\sqrt{\tau^2 + 1}} \begin{pmatrix}
\tau & 1 \\ -1 & \tau
\end{pmatrix}.
$$

In terms of this fundamental system the $\gamma$-field $\gamma\_\tau(\lambda)$ can then be expressed as

$$\gamma\_{\tau}(\cdot,\lambda) = v\_1(\cdot,\lambda) + M\_{\tau}(\lambda)v\_2(\cdot,\lambda);\tag{6.7.3}$$

cf. Lemma 6.4.5. Note that the formal solution

$$v\_2(\cdot,\lambda) = \frac{1}{\sqrt{\tau^2 + 1}} \left( u\_1(\cdot,\lambda) + \tau u\_2(\cdot,\lambda) \right) \tag{6.7.4}$$

satisfies the boundary condition $(pf')(a) = \tau f(a)$ imposed on the functions in the domain $\ker(\Gamma\_1 - \tau\Gamma\_0)$ of the self-adjoint realization $A\_\tau$, $\tau \in \mathbb{R} \cup \{\infty\}$.

Using the fundamental system $(v\_1(\cdot,\lambda); v\_2(\cdot,\lambda))$, define for any $\lambda \in \mathbb{C}$ and $h \in L^2\_r(a,x)$, $a < x < b$,

$$\begin{aligned} (\mathcal{H}(\lambda)h)(t) &= v\_1(t,\lambda) \int\_a^t v\_2(s,\lambda)h(s)r(s) \, ds \\ &- v\_2(t,\lambda) \int\_a^t v\_1(s,\lambda)h(s)r(s) \, ds, \quad t \in (a,x), \quad \lambda \in \mathbb{C}. \end{aligned}$$

Then $\mathcal{H}(\lambda)$ is a well-defined integral operator and one sees that for fixed $\lambda \in \mathbb{C}$ the function $f(t,\lambda) = (\mathcal{H}(\lambda)h)(t)$ is absolutely continuous and satisfies

$$(L - \lambda)f = h, \quad f(a) = 0, \ (pf')(a) = 0. \tag{6.7.5}$$

In particular, $\mathcal{H}(\lambda)$ maps $L^2\_r(a,x)$ into itself. It follows directly that

$$v\_i(\cdot,\lambda) - v\_i(\cdot,\mu) = (\lambda - \mu)\mathcal{H}(\lambda)v\_i(\cdot,\mu), \quad i = 1,2,\tag{6.7.6}$$

since the left-hand side and the right-hand side satisfy the same differential equation and the same initial conditions at a; cf. (6.7.5).
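As a concrete illustration of (6.7.5), the following sketch works under simplifying assumptions not made in the text: $p = r = 1$, $q = 0$, $a = 0$, $\lambda = 0$, with the fundamental system $v\_1 = 1$, $v\_2 = t$, which has the required Wronskian $v\_1(pv\_2') - v\_2(pv\_1') = 1$.

```python
# Concrete instance of (6.7.5): with p = r = 1, q = 0, a = 0 and
# lambda = 0, the expression is Lf = -f'' and v1 = 1, v2 = t is a
# fundamental system with Wronskian v1*(p*v2') - v2*(p*v1') = 1.
import sympy as sp

t, s = sp.symbols('t s')
v1, v2 = sp.Integer(1), t
h = s**2  # an arbitrary concrete choice of h

# (H(0)h)(t) = v1(t) int_0^t v2 h r ds - v2(t) int_0^t v1 h r ds
f = v1 * sp.integrate(v2.subs(t, s) * h, (s, 0, t)) \
    - v2 * sp.integrate(v1 * h, (s, 0, t))

# (L - 0)f = -f'' = h, with zero initial data at a = 0
assert sp.simplify(-sp.diff(f, t, 2) - h.subs(s, t)) == 0
assert f.subs(t, 0) == 0
assert sp.diff(f, t).subs(t, 0) == 0
```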

**Lemma 6.7.2.** Let $a < x < b$ and let $h \in L^2\_r(a,x)$. Then

$$\|\mathcal{H}(\lambda)h\|\_{L^{2}\_{r}(a,x)}^{2} \le 2\|v\_{1}(\cdot,\lambda)\|\_{L^{2}\_{r}(a,x)}^{2} \|v\_{2}(\cdot,\lambda)\|\_{L^{2}\_{r}(a,x)}^{2} \|h\|\_{L^{2}\_{r}(a,x)}^{2}.$$

Proof. For $h \in L^2\_r(a,x)$ define the functions $g\_i(\cdot,\lambda)$, $i = 1, 2$, by

$$g\_i(t, \lambda) = \int\_a^t v\_i(s, \lambda) h(s) r(s) \, ds.$$

The Cauchy–Schwarz inequality then gives

$$|g\_i(t, \lambda)|^2 \le ||v\_i(\cdot, \lambda)||\_{L^2\_r(a, t)}^2 ||h||\_{L^2\_r(a, t)}^2, \quad i = 1, 2.$$

Hence, with $f(t,\lambda) = (\mathcal{H}(\lambda)h)(t)$, it follows from the definition of $\mathcal{H}(\lambda)$ that

$$\begin{split} |f(t,\lambda)|^{2} \leq{} & 2\left( |v\_{1}(t,\lambda)|^{2} |g\_{2}(t,\lambda)|^{2} + |v\_{2}(t,\lambda)|^{2} |g\_{1}(t,\lambda)|^{2} \right) \\ \leq{} & 2\left( |v\_{1}(t,\lambda)|^{2} \|v\_{2}(\cdot,\lambda)\|\_{L^{2}\_{r}(a,t)}^{2} \|h\|\_{L^{2}\_{r}(a,t)}^{2} \right. \\ & \left. + |v\_{2}(t,\lambda)|^{2} \|v\_{1}(\cdot,\lambda)\|\_{L^{2}\_{r}(a,t)}^{2} \|h\|\_{L^{2}\_{r}(a,t)}^{2} \right). \end{split}$$

Integration of this inequality yields

$$\begin{aligned} \|f(\cdot,\lambda)\|\_{L^2\_r(a,x)}^2 &\le 2\int\_a^x \left(|v\_1(t,\lambda)|^2\|v\_2(\cdot,\lambda)\|\_{L^2\_r(a,t)}^2 + |v\_2(t,\lambda)|^2\|v\_1(\cdot,\lambda)\|\_{L^2\_r(a,t)}^2\right)\|h\|\_{L^2\_r(a,t)}^2\,r(t)\,dt \\ &= 2\int\_a^x \frac{d}{dt}\left(\|v\_1(\cdot,\lambda)\|\_{L^2\_r(a,t)}^2\|v\_2(\cdot,\lambda)\|\_{L^2\_r(a,t)}^2\right) \|h\|\_{L^2\_r(a,t)}^2\,dt, \end{aligned}$$

since for i = 1, 2,

$$\frac{d}{dt} \|v\_i(\cdot, \lambda)\|\_{L^2\_r(a, t)}^2 = \frac{d}{dt} \int\_a^t |v\_i(s, \lambda)|^2 r(s) \, ds = |v\_i(t, \lambda)|^2 r(t).$$

Therefore,

$$\|f(\cdot,\lambda)\|\_{L^{2}\_{r}(a,x)}^{2} \le 2\|h\|\_{L^{2}\_{r}(a,x)}^{2} \int\_{a}^{x} \frac{d}{dt} \left(\|v\_{1}(\cdot,\lambda)\|\_{L^{2}\_{r}(a,t)}^{2} \|v\_{2}(\cdot,\lambda)\|\_{L^{2}\_{r}(a,t)}^{2}\right)dt,$$

which implies the assertion. $\square$
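For a quick numerical sanity check of the lemma, the same simplifying toy setting as before can be used ($r = 1$, $v\_1 = 1$, $v\_2 = t$ on $(a,x) = (0,1)$ and $h = 1$; these choices are illustrative only and not taken from the text):

```python
# Numerical sanity check of the bound in Lemma 6.7.2 with r = 1,
# v1 = 1, v2 = t on (a, x) = (0, 1) and h = 1.
import sympy as sp

t, s = sp.symbols('t s')
v1, v2, h = sp.Integer(1), t, sp.Integer(1)

f = v1 * sp.integrate(v2.subs(t, s) * h, (s, 0, t)) \
    - v2 * sp.integrate(v1 * h, (s, 0, t))          # (H h)(t) = -t**2/2

lhs = sp.integrate(f**2, (t, 0, 1))                  # ||H h||^2 = 1/20
norm_v1 = sp.integrate(v1**2, (t, 0, 1))             # = 1
norm_v2 = sp.integrate(v2**2, (t, 0, 1))             # = 1/3
norm_h = sp.integrate(h**2, (t, 0, 1))               # = 1

assert lhs <= 2 * norm_v1 * norm_v2 * norm_h         # 1/20 <= 2/3
```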

**Lemma 6.7.3.** Let $\xi \in \mathbb{R}$ be a fixed number. The function $x \mapsto \varepsilon\_\tau(x,\xi)$ given by

$$\sqrt{2}\,\varepsilon\_{\tau}(x,\xi)\|v\_{1}(\cdot,\xi)\|\_{L^{2}\_{r}(a,x)}\|v\_{2}(\cdot,\xi)\|\_{L^{2}\_{r}(a,x)} = 1, \quad a < x < b,$$

is well defined, continuous, nonincreasing, and satisfies

$$\lim\_{x \to b} \varepsilon\_{\tau}(x, \xi) = 0.$$

Proof. For any $a < x < b$ the two functions

$$x \mapsto \|v\_1(\cdot,\xi)\|\_{L^2\_r(a,x)} \quad \text{and} \quad x \mapsto \|v\_2(\cdot,\xi)\|\_{L^2\_r(a,x)}$$

have positive values, so clearly $\varepsilon\_\tau(x,\xi) > 0$ is well defined. Note that the mapping $x \mapsto \|v\_1(\cdot,\xi)\|\_{L^2\_r(a,x)}\|v\_2(\cdot,\xi)\|\_{L^2\_r(a,x)}$ is continuous and nondecreasing. The assumption that $b$ is in the limit-point case implies that not both $v\_1(\cdot,\xi)$ and $v\_2(\cdot,\xi)$ belong to $L^2\_r(a,b)$. Thus, the limit result follows. $\square$

The function $x \mapsto \varepsilon\_\tau(x,\xi)$ appears in the estimate in the following theorem.

**Theorem 6.7.4.** Let $M\_\tau$ be the Weyl function in (6.7.1) corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$. Assume that $\xi \in \mathbb{R}$ and let $\varepsilon\_\tau(x,\xi)$ be as in Lemma 6.7.3. Then for $a < x < b$

$$\frac{1}{d\_0} \le \frac{||v\_2(\cdot,\xi)||\_{L^2\_r(a,x)}}{||v\_1(\cdot,\xi)||\_{L^2\_r(a,x)}} \left| M\_\tau(\xi + i\varepsilon\_\tau(x,\xi)) \right| \le d\_0,$$

where $d\_0 = 1 + 2\sqrt{2} + 2\sqrt{2 + \sqrt{2}}$.
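The constant $d\_0$ can be understood as follows (an interpretive sketch, not stated verbatim in the text): evaluating the final estimate of the proof below at $\varepsilon = \varepsilon\_\tau(x,\xi)$, where $\varepsilon\,\|v\_1\|\|v\_2\| = 1/\sqrt{2}$, gives $|X - 1| \le 2\cdot 2^{1/4}\sqrt{X}$ for the middle quantity $X$, and $d\_0$ is the larger root of the corresponding quadratic $(X-1)^2 = 4\sqrt{2}\,X$; since the product of the two roots is $1$, the smaller root is $1/d\_0$.

```python
# Check that d0 = 1 + 2*sqrt(2) + 2*sqrt(2 + sqrt(2)) is a root of
# (X - 1)**2 = 4*sqrt(2)*X, and that the two roots have product 1.
import sympy as sp

X = sp.symbols('X')
d0 = 1 + 2 * sp.sqrt(2) + 2 * sp.sqrt(2 + sp.sqrt(2))

assert sp.simplify((d0 - 1)**2 - 4 * sp.sqrt(2) * d0) == 0

roots = sp.solve(sp.Eq((X - 1)**2, 4 * sp.sqrt(2) * X), X)
assert sp.simplify(roots[0] * roots[1] - 1) == 0
```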

Proof. Assume that $\xi \in \mathbb{R}$ and let $\varepsilon > 0$. Define the function $\psi(\cdot,\xi,\varepsilon)$ by

$$
\psi(\cdot,\xi,\varepsilon) = v\_1(\cdot,\xi) + M\_\tau(\xi+i\varepsilon)v\_2(\cdot,\xi). \tag{6.7.7}
$$

Then for $a < x < b$,

$$\left| \|v\_2(\cdot,\xi)\|\_{L^2\_r(a,x)} \left|M\_\tau(\xi+i\varepsilon)\right| - \|v\_1(\cdot,\xi)\|\_{L^2\_r(a,x)} \right| \leq \|\psi(\cdot,\xi,\varepsilon)\|\_{L^2\_r(a,x)},$$


or, equivalently,

$$\left|\frac{\|v\_2(\cdot,\xi)\|\_{L^2\_r(a,x)}}{\|v\_1(\cdot,\xi)\|\_{L^2\_r(a,x)}}\left|M\_\tau(\xi+i\varepsilon)\right|-1\right| \le \frac{\|\psi(\cdot,\xi,\varepsilon)\|\_{L^2\_r(a,x)}}{\|v\_1(\cdot,\xi)\|\_{L^2\_r(a,x)}}.\tag{6.7.8}$$

The term on the right-hand side of (6.7.8) will now be estimated in a suitable way. In the definition (6.7.7) rewrite the right-hand side using the identity in (6.7.6) with λ = ξ and μ = ξ + iε. Together with (6.7.3) this shows the identity

$$
\psi(\cdot,\xi,\varepsilon) = \gamma\_\tau(\cdot,\xi+i\varepsilon) - i\varepsilon \mathcal{H}(\xi)\gamma\_\tau(\cdot,\xi+i\varepsilon),
$$

expressing the function ψ(·, ξ, ε) directly in terms of the γ-field. It follows from Lemma 6.7.2 that

$$\|\psi(\cdot,\xi,\varepsilon)\|\_{L^2\_r(a,x)} \le \left(1 + \sqrt{2}\,\varepsilon\,\|v\_1(\cdot,\xi)\|\_{L^2\_r(a,x)}\|v\_2(\cdot,\xi)\|\_{L^2\_r(a,x)}\right)\|\gamma\_\tau(\cdot,\xi+i\varepsilon)\|\_{L^2\_r(a,x)}.$$

Therefore, the right-hand side of (6.7.8) is estimated by

$$\begin{split} &\frac{\left(1+\sqrt{2}\,\varepsilon\,\|v_1(\cdot,\xi)\|_{L^2_r(a,x)}\|v_2(\cdot,\xi)\|_{L^2_r(a,x)}\right)\|\gamma_\tau(\cdot,\xi+i\varepsilon)\|_{L^2_r(a,x)}}{\|v_1(\cdot,\xi)\|_{L^2_r(a,x)}} \\ &\quad = \frac{1+\sqrt{2}\,\varepsilon\,\|v_1(\cdot,\xi)\|_{L^2_r(a,x)}\|v_2(\cdot,\xi)\|_{L^2_r(a,x)}}{\left(\|v_1(\cdot,\xi)\|_{L^2_r(a,x)}\|v_2(\cdot,\xi)\|_{L^2_r(a,x)}\right)^{\frac12}}\,\frac{\|v_2(\cdot,\xi)\|_{L^2_r(a,x)}^{\frac12}}{\|v_1(\cdot,\xi)\|_{L^2_r(a,x)}^{\frac12}}\,\|\gamma_\tau(\cdot,\xi+i\varepsilon)\|_{L^2_r(a,x)}. \end{split}$$

Now observe that $\|\gamma_\tau(\cdot,\xi+i\varepsilon)\|_{L^2_r(a,x)} \le \|\gamma_\tau(\cdot,\xi+i\varepsilon)\|_{L^2_r(a,b)}$ and it follows from (6.7.2) that

$$\|\gamma\_{\tau}(\cdot,\xi+i\varepsilon)\|\_{L^{2}\_{r}(a,b)} \leq \sqrt{\frac{\mathrm{Im}\,M\_{\tau}(\xi+i\varepsilon)}{\varepsilon}} \leq \sqrt{\frac{|M\_{\tau}(\xi+i\varepsilon)|}{\varepsilon}}.$$

Thus, for any ε > 0,

$$\begin{split} &\left|\frac{\|v_2(\cdot,\xi)\|_{L^2_r(a,x)}}{\|v_1(\cdot,\xi)\|_{L^2_r(a,x)}}\,|M_\tau(\xi+i\varepsilon)| - 1\right| \\ &\quad \le \frac{1+\sqrt{2}\,\varepsilon\,\|v_1(\cdot,\xi)\|_{L^2_r(a,x)}\|v_2(\cdot,\xi)\|_{L^2_r(a,x)}}{\left(\varepsilon\,\|v_1(\cdot,\xi)\|_{L^2_r(a,x)}\|v_2(\cdot,\xi)\|_{L^2_r(a,x)}\right)^{\frac12}}\left(\frac{\|v_2(\cdot,\xi)\|_{L^2_r(a,x)}}{\|v_1(\cdot,\xi)\|_{L^2_r(a,x)}}\,|M_\tau(\xi+i\varepsilon)|\right)^{\frac12}. \end{split}$$

Now for $\xi\in\mathbb{R}$ choose $\varepsilon = \varepsilon_\tau(x,\xi)$ in this estimate. This choice minimizes the first factor on the right-hand side, whose minimal value is $2^{5/4}$. Hence, the nonnegative quantity

$$Q = \frac{\|v_2(\cdot,\xi)\|_{L^2_r(a,x)}}{\|v_1(\cdot,\xi)\|_{L^2_r(a,x)}}\,\left| M_\tau(\xi + i\varepsilon_\tau(x,\xi)) \right|$$

satisfies the inequality

$$|Q - 1| \le 2^{5/4} Q^{\frac{1}{2}}$$

or, equivalently, $Q^2 - 2Q + 1 \le 4\sqrt{2}\,Q$. Therefore, $1/d_0 \le Q \le d_0$, which completes the proof. $\square$

The following result is now a direct consequence of Theorem 6.7.4.
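Both constants appearing in this step can be verified by a short computation. The following is a sketch; it uses only the inequality $|Q-1| \le 2^{5/4}Q^{1/2}$ above, while the identification of the minimizing value of $\varepsilon$ with the choice $\varepsilon = \varepsilon_\tau(x,\xi)$ rests on Lemma 6.7.3, which is not reproduced here.

```latex
% Minimizing the first factor: with t = \varepsilon \|v_1\|_{L^2_r(a,x)} \|v_2\|_{L^2_r(a,x)},
\[
  f(t) = \frac{1+\sqrt{2}\,t}{\sqrt{t}} = t^{-1/2} + \sqrt{2}\,t^{1/2},
  \qquad
  f'(t) = -\tfrac{1}{2}\,t^{-3/2} + \tfrac{\sqrt{2}}{2}\,t^{-1/2},
\]
% so f'(t) = 0 at t = 1/\sqrt{2}, where
\[
  f\bigl(1/\sqrt{2}\bigr) = 2^{1/4} + \sqrt{2}\cdot 2^{-1/4} = 2\cdot 2^{1/4} = 2^{5/4}.
\]
% Solving |Q-1| \le 2^{5/4} Q^{1/2}: squaring gives 2^{5/2} = 4\sqrt{2}, hence
\[
  Q^2 - (2+4\sqrt{2})\,Q + 1 \le 0,
\]
% whose boundary roots are, using (1+2\sqrt{2})^2 - 1 = 8+4\sqrt{2} = 4(2+\sqrt{2}),
\[
  Q_{\pm} = (1+2\sqrt{2}) \pm 2\sqrt{2+\sqrt{2}}.
\]
% Since Q_+ Q_- = 1, the inequality reads 1/d_0 \le Q \le d_0 with
% d_0 = Q_+ = 1 + 2\sqrt{2} + 2\sqrt{2+\sqrt{2}} \approx 7.52.
```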

**Theorem 6.7.5.** Let M be the Weyl function corresponding to the boundary triplet $\{\mathbb{C},\Gamma_0,\Gamma_1\}$ and let $\xi\in\mathbb{R}$. Then the following statements hold:

(i) If $\tau\in\mathbb{R}$, then the solution $u_1(\cdot,\xi)+\tau u_2(\cdot,\xi)$ of $(L-\xi)y = 0$, $(py')(a) = \tau y(a)$ (which is unique up to scalar multiples) is subordinate if and only if

$$\lim\_{\varepsilon \downarrow 0} M(\xi + i\varepsilon) = \tau.$$

(ii) If $\tau = \infty$, then the solution $u_2(\cdot,\xi)$ of $(L-\xi)y = 0$, $y(a) = 0$ (which is unique up to scalar multiples) is subordinate if and only if

$$\lim\_{\varepsilon \downarrow 0} |M(\xi + i\varepsilon)| = \infty.$$

Proof. Since $x\mapsto\varepsilon_\tau(x,\xi)$ is continuous, nonincreasing, and tends to $0$ as $x\to b$, one has the identity

$$\lim\_{\varepsilon \downarrow 0} M\_{\tau}(\xi + i\varepsilon) = \lim\_{x \to b} M\_{\tau}(\xi + i\varepsilon\_{\tau}(x, \xi)).$$

(i) Assume that $\tau\in\mathbb{R}$. It suffices to show that $|M_\tau(\xi+i\varepsilon)|\to\infty$ as $\varepsilon\downarrow 0$ if and only if the solution

$$v\_2(\cdot,\xi) = \frac{1}{\sqrt{\tau^2 + 1}} (u\_1(\cdot,\xi) + \tau u\_2(\cdot,\xi))$$

in (6.7.4) is subordinate. For this, assume first that $|M_\tau(\xi+i\varepsilon)|\to\infty$. Then, by Theorem 6.7.4,

$$\lim\_{x \to b} \frac{\|v\_2(\cdot, \xi)\|\_{L^2\_r(a, x)}}{\|v\_1(\cdot, \xi)\|\_{L^2\_r(a, x)}} = 0. \tag{6.7.9}$$

Hence, for any $c_1,c_2\in\mathbb{R}$ with $c_1\neq 0$, one obtains from (6.7.9) that

$$\lim\_{x \to b} \frac{\|v\_2(\cdot,\xi)\|\_{L^2\_r(a,x)}}{\|c\_1 v\_1(\cdot,\xi) + c\_2 v\_2(\cdot,\xi)\|\_{L^2\_r(a,x)}} = 0,\tag{6.7.10}$$

and therefore the solution $v_2(\cdot,\xi)$ is subordinate. Conversely, assume that $v_2(\cdot,\xi)$ is subordinate, so that (6.7.10) holds for all $c_1,c_2\in\mathbb{R}$ with $c_1\neq 0$. Then clearly (6.7.9) holds, and from Theorem 6.7.4 it follows that $|M_\tau(\xi+i\varepsilon)|\to\infty$.

It is a consequence of (6.7.1) that for ε ↓ 0 one has

$$|M\_{\tau}(\xi + i\varepsilon)| \to \infty \quad \Leftrightarrow \quad M(\xi + i\varepsilon) \to \tau.$$

This establishes the assertion for $\tau\in\mathbb{R}$.

(ii) The case $\tau = \infty$ can be treated in the same way as (i). $\square$

Let $\{\mathbb{C},\Gamma_0,\Gamma_1\}$ be the boundary triplet for Tmax in Proposition 6.4.1 and recall that $A_0$ corresponds to the Dirichlet boundary condition

$$f(a) = 0.\tag{6.7.11}$$

Let M be the Weyl function of $\{\mathbb{C},\Gamma_0,\Gamma_1\}$ with the integral representation

$$M(\lambda) = \alpha + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) d\tau(t), \tag{6.7.12}$$

where $\alpha\in\mathbb{R}$ and the measure $\tau$ satisfies

$$\int\_{\mathbb{R}} \frac{1}{t^2 + 1} \, d\tau(t) < \infty;$$

cf. Theorem A.2.5. On the basis of Theorem 6.7.5 the following sets will be introduced.

**Definition 6.7.6.** With the Sturm–Liouville equation $(L-\xi)y = 0$, $\xi\in\mathbb{R}$, the following subsets of $\mathbb{R}$ are associated:


It is a direct consequence of Definition 6.7.6 that

$$\mathbb{R} = \mathcal{M}_{\rm c}\,\dot\cup\,\mathcal{M}_{\rm ac}\,\dot\cup\,\mathcal{M}_{\rm s},\qquad \mathcal{M} = \mathcal{M}_{\rm ac}\,\dot\cup\,\mathcal{M}_{\rm s},\qquad \mathcal{M}_{\rm s} = \mathcal{M}_{\rm sc}\,\dot\cup\,\mathcal{M}_{\rm p}$$

hold, where $\dot\cup$ stands for the disjoint union.

The following proposition is based on Corollary 3.1.8, where minimal supports for the various parts of the measure τ in the integral representation (6.7.12) of M are described in terms of the boundary behavior of the Nevanlinna function M.

**Proposition 6.7.7.** Let M be the Weyl function associated with the boundary triplet $\{\mathbb{C},\Gamma_0,\Gamma_1\}$ and let $\tau$ be the corresponding measure in (6.7.12). Then the sets

$$\mathcal{M},\quad \mathcal{M}_{\rm ac},\quad \mathcal{M}_{\rm s},\quad \mathcal{M}_{\rm sc},\quad \mathcal{M}_{\rm p}$$

are minimal supports for the measures

$$\tau,\quad \tau_{\rm ac},\quad \tau_{\rm s},\quad \tau_{\rm sc},\quad \tau_{\rm p},$$

respectively.

Proof. Step 1. It will be shown that the set $\mathcal{M}_{\rm ac}$ is a minimal support for the measure $\tau_{\rm ac}$. According to Theorem 6.7.5, $\mathcal{M}_{\rm ac}$ coincides with the set of all $\xi\in\mathbb{R}$ for which neither of the conditions

$$\lim_{\varepsilon \downarrow 0} M(\xi + i\varepsilon) \in \mathbb{R} \quad \text{and} \quad \lim_{\varepsilon \downarrow 0} |M(\xi + i\varepsilon)| = \infty$$

is satisfied. Hence, $\xi\in\mathbb{R}\setminus\mathcal{M}_{\rm ac}$ if and only if

$$\lim_{\varepsilon \downarrow 0} M(\xi + i\varepsilon) \in \mathbb{R} \quad \text{or} \quad \lim_{\varepsilon \downarrow 0} |M(\xi + i\varepsilon)| = \infty.$$

Recall that the set $\mathcal{M}'_{\rm ac}$, defined by

$$\mathcal{M}'\_{\rm ac} = \left\{ \xi \in \mathbb{R} \, : \, 0 < \lim\_{\varepsilon \downarrow 0} \operatorname{Im} M(\xi + i\varepsilon) < \infty \right\}, \tag{6.7.13}$$

is a minimal support for $\tau_{\rm ac}$; see Corollary 3.1.8. The following identity and inclusion are straightforward consequences of the definitions:

$$\mathcal{M}'_{\rm ac}\setminus\mathcal{M}_{\rm ac} = \left\{\xi\in\mathcal{M}'_{\rm ac} \,:\, \lim_{\varepsilon\downarrow 0}|M(\xi+i\varepsilon)| = \infty\right\},$$

$$\mathcal{M}_{\rm ac}\setminus\mathcal{M}'_{\rm ac} \subset \left\{\xi\in\mathbb{R} \,:\, \lim_{\varepsilon\downarrow 0} M(\xi+i\varepsilon)\ \text{does not exist in}\ \mathbb{C}\cup\{\infty\}\right\}.$$

Hence, it follows from Corollary 3.1.7 that

$$m(\mathcal{M}'_{\rm ac}\setminus\mathcal{M}_{\rm ac}) = 0 \quad \text{and} \quad m(\mathcal{M}_{\rm ac}\setminus\mathcal{M}'_{\rm ac}) = 0,\tag{6.7.14}$$

where $m$ denotes the Lebesgue measure. However, since $\tau_{\rm ac}$ is absolutely continuous with respect to $m$, it also follows that $\tau_{\rm ac}(\mathcal{M}'_{\rm ac}\setminus\mathcal{M}_{\rm ac}) = 0$. Therefore, $\mathcal{M}_{\rm ac}$ is a minimal support for $\tau_{\rm ac}$; cf. Lemma 3.1.1.

Step 2. It will be shown that the set $\mathcal{M}_{\rm s}$ is a minimal support for the measure $\tau_{\rm s}$. According to Theorem 6.7.5, $\mathcal{M}_{\rm s}$ admits the following description:

$$\mathcal{M}\_{\mathfrak{s}} = \left\{ \xi \in \mathbb{R} \, : \, \lim\_{\varepsilon \downarrow 0} |M(\xi + i\varepsilon)| = \infty \right\}.$$

Observe that the set

$$\mathcal{M}'_{\rm s} = \left\{ \xi \in \mathbb{R} \, : \, \lim_{\varepsilon \downarrow 0} \mathrm{Im} \, M(\xi + i\varepsilon) = \infty \right\}$$

is a minimal support for the measure $\tau_{\rm s}$ by Corollary 3.1.8. Note that $\mathcal{M}'_{\rm s}\subset\mathcal{M}_{\rm s}$ and that

$$m(\mathcal{M}_{\rm s}\setminus\mathcal{M}'_{\rm s}) \le m(\mathcal{M}_{\rm s}) = 0,$$

where the last identity follows from Corollary 3.1.7. Therefore, $\mathcal{M}_{\rm s}$ is a minimal support for $\tau_{\rm s}$ by Lemma 3.1.1.

Step 3. Here the remaining assertions will be proved. Since M is a Nevanlinna function, the limit $\lim_{\varepsilon\downarrow 0}\varepsilon\,\mathrm{Im}\,M(\xi+i\varepsilon)$ exists, is finite, and is nonnegative; cf. (3.1.27) and (3.1.12). This allows one to divide the minimal support $\mathcal{M}_{\rm s}$ of $\tau_{\rm s}$ into two disjoint subsets,

$$\mathcal{M}_{\rm sc} = \left\{ \xi \in \mathcal{M}_{\rm s} \, : \, \lim_{\varepsilon \downarrow 0} \varepsilon \, \mathrm{Im} \, M(\xi + i\varepsilon) = 0 \right\}$$

and

$$\mathcal{M}\_{\mathrm{P}} = \left\{ \xi \in \mathfrak{M}\_{\mathrm{s}} : \lim\_{\varepsilon \downarrow 0} \varepsilon \, \mathrm{Im} \, M(\xi + i\varepsilon) > 0 \right\}.$$

By Corollary 3.1.8 the set

$$\mathcal{M}'\_{\mathrm{sc}} = \left\{ \xi \in \mathfrak{M}'\_{\mathrm{s}} : \lim\_{\varepsilon \downarrow 0} \varepsilon \, \mathrm{Im} \, M(\xi + i\varepsilon) = 0 \right\}$$

is a minimal support for $\tau_{\rm sc}$. Since $\mathcal{M}'_{\rm sc}\subset\mathcal{M}_{\rm sc}$ and

$$m(\mathfrak{M}\_{\mathfrak{sc}} \backslash \mathfrak{M}'\_{\mathfrak{sc}}) \le m(\mathfrak{M}\_{\mathfrak{sc}}) \le m(\mathfrak{M}\_{\mathfrak{s}}) = 0$$

by Corollary 3.1.7, $\mathcal{M}_{\rm sc}$ is also a minimal support for $\tau_{\rm sc}$.

On the other hand, clearly $\mathcal{M}_{\rm p}\subset\mathcal{M}'_{\rm s}$, and hence

$$\mathcal{M}\_{\mathrm{P}} = \left\{ \xi \in \mathcal{M}\_{\mathrm{s}}' : \lim\_{\varepsilon \downarrow 0} \varepsilon \, \mathrm{Im} \, M(\xi + i\varepsilon) > 0 \right\},$$

which is a minimal support of $\tau_{\rm p}$ by Corollary 3.1.8.

Finally, the assertion concerning the set $\mathcal{M}$ is a consequence of the statements proved above. $\square$

The minimal supports in Proposition 6.7.7 are intimately connected with the spectrum of A0. For the absolutely continuous spectrum one obtains the following result, where the notion of the absolutely continuous closure of a Borel set from Definition 3.2.4 is used. Similar statements (with an inclusion) can be formulated for the singular parts of the spectrum; cf. Section 3.6.

**Theorem 6.7.8.** Let $A_0$ be the self-adjoint realization of L corresponding to the Dirichlet boundary condition at the regular endpoint a and let $\mathcal{M}_{\rm ac}$ be as in Definition 6.7.6. Then

$$
\sigma\_{\rm ac}(A\_0) = \operatorname{clos}\_{\rm ac}(\mathfrak{M}\_{\rm ac}).
$$

Proof. Since the minimal operator Tmin is simple by Proposition 6.4.4, one can apply Theorem 3.6.5 with $\Delta = \mathbb{R}$, which yields

$$\sigma\_{\rm ac}(A\_0) = \text{clos}\_{\rm ac} \{ \xi \in \mathbb{R} \, : \, 0 < \lim\_{\varepsilon \downarrow 0} \text{Im} \, M(\xi + i\varepsilon) < \infty \} = \text{clos}\_{\rm ac}(\mathcal{M}'\_{\rm ac});$$

cf. (6.7.13). Since $m(\mathcal{M}'_{\rm ac}\setminus\mathcal{M}_{\rm ac}) = 0$ and $m(\mathcal{M}_{\rm ac}\setminus\mathcal{M}'_{\rm ac}) = 0$ by (6.7.14), it follows from Lemma 3.2.5 that

$$\mathrm{clos}_{\rm ac}(\mathcal{M}'_{\rm ac}) = \mathrm{clos}_{\rm ac}(\mathcal{M}'_{\rm ac}\cap\mathcal{M}_{\rm ac}) = \mathrm{clos}_{\rm ac}(\mathcal{M}_{\rm ac}).$$

This leads to the result. $\square$

## **6.8 Semibounded Sturm–Liouville expressions in the regular case**

Let L be the Sturm–Liouville differential expression in (6.1.1),

$$L = \frac{1}{r}\left[-DpD + q\right], \qquad D = \frac{d}{dx},$$

and assume that the endpoints a and b are regular, that is, [a, b] is a compact interval and the coefficient functions are real and satisfy

$$\begin{cases} p(x) \neq 0, \ r(x) > 0, & \text{for almost all } x \in (a, b), \\ 1/p, q, r \in L^1(a, b). \end{cases} \tag{6.8.1}$$

Recall from Proposition 6.3.1 that $\{\mathbb{C}^2,\Gamma_0,\Gamma_1\}$, where

$$
\Gamma\_0 f = \begin{pmatrix} f(a) \\ f(b) \end{pmatrix} \quad \text{and} \quad \Gamma\_1 f = \begin{pmatrix} (pf')(a) \\ -(pf')(b) \end{pmatrix}, \quad f \in \text{dom}\, T\_{\text{max}}\,,\tag{6.8.2}
$$

is a boundary triplet for Tmax . In the present section it will be assumed, in addition to (6.8.1), that the sign condition

$$p(x) > 0 \quad \text{for almost all} \quad x \in (a, b) \tag{6.8.3}$$

holds; cf. (6.1.26). This assumption will imply that the minimal operator and all self-adjoint realizations of L in $L^2_r(a,b)$ are semibounded from below. The main objective of this section is to provide a characterization of the closed semibounded forms and the corresponding semibounded self-adjoint realizations of L by using the abstract techniques developed in Section 5.6.

In order to apply the results from Section 5.6 a boundary pair will be constructed which is compatible with the boundary triplet in (6.8.2). As a first step, associate with the coefficient functions which satisfy (6.8.1) and (6.8.3) the quadratic form defined by

$$\mathfrak{t}[f,g] = \int_{a}^{b} \left( (\sqrt{p}f')(x)\overline{(\sqrt{p}g')(x)} + q(x)f(x)\overline{g(x)} \right) dx \tag{6.8.4}$$

for f,g ∈ dom t = D, where

$$\mathfrak{D} = \left\{ f \in L^2\_r(a, b) : f \in AC(a, b), \ \sqrt{p}f' \in L^2(a, b) \right\}.\tag{6.8.5}$$

It turns out that t is densely defined, closed, and semibounded. Hence, there exists a semibounded self-adjoint operator $S_1$ which corresponds to t; it will be shown that $S_1$ extends Tmin and that $S_1$ and the Friedrichs extension $S_F$ are transversal. The next step is to define the mapping

$$
\Lambda f = \begin{pmatrix} f(a) \\ f(b) \end{pmatrix}, \quad f \in \mathfrak{D} = \text{dom } \mathfrak{t}. \tag{6.8.6}
$$

In Lemma 6.8.4 it will be proved that $\{\mathbb{C}^2,\Lambda\}$ is a well-defined boundary pair for Tmin on D corresponding to $S_1$ that is compatible with the boundary triplet $\{\mathbb{C}^2,\Gamma_0,\Gamma_1\}$ in (6.8.2). In other words, the mapping $\Lambda$ is an extension of $\Gamma_0$ and the self-adjoint operator $S_1$ corresponding to the form t on D coincides with the operator $A_1$ corresponding to $\Gamma_1$. Therefore, Theorem 5.6.13 and Corollary 5.6.14 can be applied, which leads to Theorem 6.8.5, the main result of this section.

Observe that the definition of the linear space D in (6.8.5) does not involve the potential q in the differential expression L. Some properties concerning D are collected in the following lemma.

**Lemma 6.8.1.** Let [a, b] be a compact interval and let the conditions (6.8.1) and (6.8.3) be satisfied. Then

$$\text{dom}\,T\_{\text{max}} \subset \left\{ f \in AC[a,b] : \, pf' \in AC[a,b] \right\} \subset \mathfrak{D} \subset AC[a,b]. \tag{6.8.7}$$

In particular, D is dense in $L^2_r(a,b)$ and for f ∈ D both limits

$$f(a) = \lim\_{x \to a} f(x) \quad \text{and} \quad f(b) = \lim\_{x \to b} f(x) \tag{6.8.8}$$

exist.

Proof. The first inclusion in (6.8.7) is clear; cf. (6.2.1) and the beginning of Section 6.3. To see the second inclusion in (6.8.7), let $f\in AC[a,b]$ with $pf'\in AC[a,b]$. In particular, $f$ is bounded, so that $f\in L^2_r(a,b)$, and $pf'$ is bounded, so that $|\sqrt{p}f'| \le C/\sqrt{p}$ for some positive constant $C$. Since $1/p\in L^1(a,b)$, it follows that $\sqrt{p}f'\in L^2(a,b)$ and thus f ∈ D.

To see the third inclusion in (6.8.7), it suffices to show that f ∈ D implies $f'\in L^1(a,b)$. In fact, for f ∈ D one has

$$\begin{aligned} \int\_a^b |f'(x)| \, dx &= \int\_a^b \frac{1}{\sqrt{p(x)}} |\sqrt{p(x)} f'(x)| \, dx \\ &\le \sqrt{\int\_a^b \frac{1}{p(x)} \, dx} \, \sqrt{\int\_a^b p(x) |f'(x)|^2 \, dx} < \infty \end{aligned}$$

by the Cauchy–Schwarz inequality.

It follows from (6.8.7) that D is a dense subspace of $L^2_r(a,b)$, since dom Tmax is dense in $L^2_r(a,b)$; cf. Theorem 6.2.1. Furthermore, the limits in (6.8.8) exist for f ∈ D since $f\in AC[a,b]$ by (6.8.7). $\square$

To study the properties of the form t in (6.8.4) one uses the decomposition t = r + q, where the forms r and q are defined by

$$\mathfrak{r}[f,g] = \int_{a}^{b} (\sqrt{p}f')(x) \overline{(\sqrt{p}g')(x)} \, dx, \quad f, g \in \mathfrak{D},\tag{6.8.9}$$

and

$$\mathfrak{q}[f,g] = \int\_{a}^{b} q(x)f(x)\overline{g(x)}\,dx, \quad f,g \in \mathfrak{D},\tag{6.8.10}$$

respectively. It follows directly from (6.8.5) that the form r is well defined. To see that the form q is well defined, note that $qf\overline{g}\in L^1(a,b)$, since f, g ∈ D ⊂ AC[a, b] are bounded. Hence, D is a natural domain of definition for the form t in (6.8.4) for any real potential $q\in L^1(a,b)$. It will be shown that the form q is a small perturbation of the form r; cf. Theorem 5.1.16. The properties of the unperturbed form r will be studied first.

Associated with the function p is the first-order differential expression N on the interval (a, b), defined by $Nf = \sqrt{p}f'$, which is meaningful for $f\in AC[a,b]$. The differential expression N generates a linear operator R from $L^2_r(a,b)$ to $L^2(a,b)$ by

$$Rf := Nf, \qquad \text{dom}\, R = \mathfrak{D}.\tag{6.8.11}$$

It is clear that the form r in (6.8.9) and the operator R in (6.8.11) are connected by

$$\mathbf{r}[f,g] = (Rf, Rg)\_{L^2(a,b)}, \quad \text{dom}\,\mathbf{r} = \mathfrak{D}, \tag{6.8.12}$$

so that the form r is closed if and only if R is closed as an operator from $L^2_r(a,b)$ to $L^2(a,b)$; cf. Lemma 5.1.21.

**Lemma 6.8.2.** Let [a, b] be a compact interval and assume that the conditions (6.8.1) and (6.8.3) are satisfied. Then r in (6.8.9) is a densely defined closed nonnegative form in $L^2_r(a,b)$. Moreover, for ε > 0 there exists $C_\varepsilon > 0$ such that

$$|f(x)|^2 \le C\_\varepsilon \|f\|\_{L^2\_r(a,b)}^2 + \varepsilon \mathbf{r}[f], \qquad x \in [a,b],\tag{6.8.13}$$

holds for all f ∈ D.

Proof. It is clear from (6.8.12) that the form r is nonnegative and densely defined; cf. Lemma 6.8.1. To show (6.8.13), observe that for f ∈ D and x, y ∈ [a, b] one has

$$\begin{split} |f(x)|^2 &\le \left( |f(y)| + |f(x) - f(y)| \right)^2 \\ &\le 2\left( |f(y)|^2 + |f(y) - f(x)|^2 \right) \\ &\le 2\left( |f(y)|^2 + \left| \int\_x^y f'(t) \, dt \right|^2 \right) \\ &\le 2\left( |f(y)|^2 + \int\_x^y p(t) |f'(t)|^2 \, dt \int\_x^y \frac{1}{p(t)} \, dt \right), \end{split} \tag{6.8.14}$$

where the Cauchy–Schwarz inequality and the integrability of 1/p were used. Let ε > 0. Due to the absolute continuity of $x\mapsto\int_a^x \frac{1}{p(t)}\,dt$ on [a, b], there exist δ > 0 and $c_\delta > 0$ such that for all x ∈ [a, b] and $J(x,\delta) = (x-\delta,x+\delta)\cap[a,b]$ one has

$$\left(\int\_{J(x,\delta)} \frac{1}{p(t)} \, dt\right) \le \frac{\varepsilon}{2} \quad \text{and} \quad \int\_{J(x,\delta)} r(y) \, dy \ge c\_{\delta}.\tag{6.8.15}$$

For x ∈ [a, b] and the corresponding interval J(x, δ) one observes from (6.8.14) that

$$\begin{split} |f(x)|^2 \int\_{J(x,\delta)} r(y) \, dy &= \int\_{J(x,\delta)} |f(x)|^2 r(y) \, dy \\ &\le 2 \int\_{J(x,\delta)} |f(y)|^2 r(y) \, dy \\ &+ 2 \int\_{J(x,\delta)} \left( \int\_x^y p(t) |f'(t)|^2 \, dt \int\_x^y \frac{1}{p(t)} \, dt \right) r(y) \, dy. \end{split} \tag{6.8.16}$$

The first term on the right-hand side can be estimated by $2\,\|f\|^2_{L^2_r(a,b)}$. For the second term on the right-hand side note that

$$\begin{aligned} &2\int\_{J(x,\delta)} \left(\int\_x^y p(t)|f'(t)|^2 \, dt \, \int\_x^y \frac{1}{p(t)} \, dt\right) r(y) \, dy \\ & \qquad \le 2\left(\int\_{J(x,\delta)} \frac{1}{p(t)} \, dt\right) \int\_{J(x,\delta)} \left(\int\_x^y p(t)|f'(t)|^2 \, dt\right) r(y) \, dy \\ & \qquad \le \varepsilon \left(\int\_a^b p(t)|f'(t)|^2 \, dt\right) \int\_{J(x,\delta)} r(y) \, dy. \end{aligned}$$

Use this with the estimate in (6.8.16), divide by $\int_{J(x,\delta)} r(y)\,dy$, and use (6.8.15) to conclude (6.8.13) with $C_\varepsilon = 2c_\delta^{-1}$ for x ∈ [a, b].
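The final division step can be spelled out; this is a short verification using only the displays above.

```latex
% From (6.8.16) and the second estimate, for every x \in [a,b],
\[
  |f(x)|^2 \int_{J(x,\delta)} r(y)\,dy
  \le 2\,\|f\|^2_{L^2_r(a,b)}
    + \varepsilon\,\Bigl(\int_a^b p(t)|f'(t)|^2\,dt\Bigr)\int_{J(x,\delta)} r(y)\,dy .
\]
% Dividing by \int_{J(x,\delta)} r(y)\,dy \ge c_\delta from (6.8.15) gives
\[
  |f(x)|^2 \le \frac{2}{c_\delta}\,\|f\|^2_{L^2_r(a,b)} + \varepsilon\,\mathfrak{r}[f],
\]
% which is (6.8.13) with C_\varepsilon = 2\,c_\delta^{-1}.
```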

To verify that the form r is closed, it suffices by (6.8.12) to prove that the operator R in (6.8.11) is closed; cf. Lemma 5.1.21. Let $f_n\in\mathrm{dom}\,R = \mathfrak{D}$ and assume that $f_n\to f$ in $L^2_r(a,b)$ for some $f\in L^2_r(a,b)$, and that $Rf_n = \sqrt{p}f_n'\to g$ in $L^2(a,b)$ for some $g\in L^2(a,b)$. It will be shown that $f\in\mathfrak{D}$ and $g = Rf$. In fact, the sequence $f_n(a)$ converges by (6.8.13) to some $\alpha\in\mathbb{C}$. Next define the function h on [a, b] by

$$h(x) := \alpha + \int_{a}^{x} \frac{g(t)}{\sqrt{p(t)}} \, dt.$$

Since 1/p is integrable, the Cauchy–Schwarz inequality shows that

$$\|g/\sqrt{p}\|_{L^1(a,b)} \le \|p^{-1}\|_{L^1(a,b)}^{1/2}\, \|g\|_{L^2(a,b)}.$$

Thus, it follows that h is well defined and absolutely continuous on [a, b] and that $\sqrt{p}\,h' = g$ almost everywhere on [a, b]. Furthermore, for a ≤ x ≤ b one obtains

$$|f\_n(x) - h(x)| \le |f\_n(a) - \alpha| + \left| \int\_a^x f\_n'(t) \, dt - \int\_a^x \frac{g(t)}{\sqrt{p(t)}} \, dt \right|$$

$$\le |f\_n(a) - \alpha| + \int\_a^x \frac{1}{\sqrt{p(t)}} |\sqrt{p(t)} f\_n'(t) - g(t)| \, dt$$

$$\begin{aligned} &\leq |f\_n(a) - \alpha| + \left( \int\_a^x |\sqrt{p(t)} f\_n'(t) - g(t)|^2 \, dt \right)^{1/2} \left( \int\_a^x \frac{1}{p(t)} \, dt \right)^{1/2} \\ &\leq |f\_n(a) - \alpha| + \|Rf\_n - g\|\_{L^2(a,b)} \|p^{-1}\|\_{L^1(a,b)}^{1/2} \to 0 \end{aligned}$$

as n → ∞. Thus, $f_n\to h$ uniformly on [a, b]. On the other hand, $f_n\to f$ in $L^2_r(a,b)$. Therefore, f = h on [a, b] and

$$
\sqrt{p(x)}f'(x) = \sqrt{p(x)}h'(x) = g(x), \quad x \in [a, b].
$$

Hence, one concludes $f\in\mathfrak{D}$ and $Rf = g$. $\square$

For another appearance of the above lemma in a slightly more general setting, see Lemma 6.9.1. Now the main result about the perturbed form t will be given.

**Lemma 6.8.3.** Let [a, b] be a compact interval and assume that the conditions (6.8.1) and (6.8.3) are satisfied. Then t in (6.8.4) is a densely defined closed semibounded form in $L^2_r(a,b)$ and the corresponding semibounded self-adjoint operator $S_1$ is an extension of Tmin. Moreover, for ε > 0 there exists $C_\varepsilon > 0$ such that

$$|f(x)|^2 \le C\_\varepsilon \|f\|\_{L^2\_r(a,b)}^2 + \varepsilon \mathbf{t}[f], \qquad x \in [a,b],\tag{6.8.17}$$

holds for all f ∈ D.

Proof. Recall the decomposition t = r + q, where r and q are defined in (6.8.9) and (6.8.10). According to Lemma 6.8.2, the form r is nonnegative and closed in $L^2_r(a,b)$. Moreover, the form q is a small perturbation of r. Indeed, f ∈ D implies that f ∈ AC[a, b], and hence

$$|\mathfrak{q}[f]| = \left| \int\_{a}^{b} q(x) |f(x)|^{2} \, dx \right| \le \sup\_{x \in [a,b]} |f(x)|^{2} \int\_{a}^{b} |q(x)| \, dx.$$

Therefore, with ε > 0 and C<sup>ε</sup> > 0 as in Lemma 6.8.2, one concludes that

$$|\mathfrak{q}[f]| \le C\_{\varepsilon} \|q\|\_{L^1(a,b)} \|f\|\_{L^2\_r(a,b)}^2 + \varepsilon \|q\|\_{L^1(a,b)} \mathfrak{r}[f],$$

and it follows that q is form-bounded with respect to r with arbitrarily small form bound. Now Theorem 5.1.16 implies that t = r + q is a semibounded closed form in $L^2_r(a,b)$. Since Tmin is equal to $\ker\Gamma_0\cap\ker\Gamma_1$, integration by parts shows that

$$(T\_{\min}f,g)\_{L^{2}\_{r}(a,b)} = \mathfrak{t}[f,g], \qquad f \in \text{dom}\,T\_{\min}, \ g \in \mathfrak{D},$$

and now the first representation theorem implies that Tmin ⊂ S1. In particular, Tmin is semibounded from below.
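The smallness of the form bound asserted above can be made explicit; the following one-line verification uses only the preceding estimate.

```latex
% Given \delta > 0 (and q \ne 0), choose \varepsilon = \delta / \|q\|_{L^1(a,b)}
% in the preceding estimate; then
\[
  |\mathfrak{q}[f]|
  \le C_{\varepsilon}\,\|q\|_{L^1(a,b)}\,\|f\|^2_{L^2_r(a,b)} + \delta\,\mathfrak{r}[f],
  \qquad f \in \mathfrak{D},
\]
% so the relative form bound of \mathfrak{q} with respect to \mathfrak{r} is smaller
% than any prescribed \delta > 0, at the cost of a larger constant C_\varepsilon.
```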

In order to show (6.8.17), decompose q as $q = q_+ - q_-$ into its positive part $q_+ = \max\{q,0\}$ and its negative part $q_- = \max\{-q,0\}$. Observe that for f ∈ D

$$\mathfrak{q}[f] = \int\_{a}^{b} q(x)|f(x)|^{2} \, dx \ge -\int\_{a}^{b} q\_{-}(x)|f(x)|^{2} \, dx\tag{6.8.18}$$

and also that

$$\int\_{a}^{b} q\_{-}(x) |f(x)|^{2} \, dx \le \sup\_{x \in [a, b]} |f(x)|^{2} \int\_{a}^{b} q\_{-}(x) \, dx. \tag{6.8.19}$$

By Lemma 6.8.2 one sees that for every δ > 0 there exists $C_\delta$ such that

$$\int\_a^b q\_{-}(x)|f(x)|^2 \, dx \le C\_\delta \|q\_{-}\|\_{L^1(a,b)} \|f\|\_{L^2\_r(a,b)}^2 + \delta \|q\_{-}\|\_{L^1(a,b)} \mathfrak{r}[f].$$

Hence, it follows from (6.8.18), (6.8.19), and t = r + q that

$$\mathfrak{t}[f] \ge \left(1 - \delta \|q_{-}\|_{L^{1}(a,b)}\right) \mathfrak{r}[f] - C_{\delta} \|q_{-}\|_{L^{1}(a,b)} \|f\|_{L^{2}_{r}(a,b)}^{2}.$$

If δ > 0 is sufficiently small this means that there exist constants α, β > 0 such that

$$\mathfrak{r}[f] \le \alpha \mathfrak{t}[f] + \beta \|f\|\_{L^2\_r(a,b)}^2.$$

A further application of Lemma 6.8.2 yields the desired result. $\square$
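To fill in this final step, here is a sketch using the constants α, β obtained above.

```latex
% For \varepsilon > 0 apply Lemma 6.8.2 with \varepsilon/\alpha in place of \varepsilon:
% there is C_{\varepsilon/\alpha} > 0 with
\[
  |f(x)|^2
  \le C_{\varepsilon/\alpha}\,\|f\|^2_{L^2_r(a,b)} + \frac{\varepsilon}{\alpha}\,\mathfrak{r}[f]
  \le \Bigl(C_{\varepsilon/\alpha} + \frac{\varepsilon\beta}{\alpha}\Bigr)\|f\|^2_{L^2_r(a,b)}
    + \varepsilon\,\mathfrak{t}[f],
\]
% where \mathfrak{r}[f] \le \alpha\,\mathfrak{t}[f] + \beta\,\|f\|^2_{L^2_r(a,b)}
% was used; this is (6.8.17) with C_\varepsilon = C_{\varepsilon/\alpha} + \varepsilon\beta/\alpha.
```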

Now the theory developed in Chapter 5 concerning boundary pairs will be applied to the semibounded minimal operator Tmin . Recall the definition of the mapping in (6.8.6). A preparation for the main result is Lemma 6.8.4 below, which is based on Lemma 5.6.5.

**Lemma 6.8.4.** Let [a, b] be a compact interval, assume that the conditions (6.8.1) and (6.8.3) are satisfied, and let $\{\mathbb{C}^2,\Gamma_0,\Gamma_1\}$ be the boundary triplet in (6.8.2). Then $\{\mathbb{C}^2,\Lambda\}$ in (6.8.6) is a boundary pair for Tmin corresponding to $S_1$ which is compatible with the boundary triplet $\{\mathbb{C}^2,\Gamma_0,\Gamma_1\}$. Moreover, one has

$$(T\_{\max}f,g)\_{L^{2}\_{r}(a,b)} = (\Gamma\_{1}f,\Lambda g) + \mathfrak{t}[f,g], \quad f \in \text{dom}\,T\_{\max}, \ g \in \mathfrak{D}.\tag{6.8.20}$$

Proof. Consider the form t in (6.8.4) defined on dom t = D and let $S_1$ be the corresponding semibounded self-adjoint operator in $L^2_r(a,b)$. By Lemma 6.8.1, dom Tmax ⊂ D and hence Λ in (6.8.6) is an extension of the boundary mapping $\Gamma_0$ in (6.8.2). Integration by parts shows that

$$\begin{split} (T\_{\text{max}}f,g)\_{L^{2}\_{r}(a,b)} &= \int\_{a}^{b} \left(-(pf')'(x) + q(x)f(x)\right) \overline{g(x)} \, dx \\ &= (pf')(a)\overline{g(a)} - (pf')(b)\overline{g(b)} + \mathfrak{t}[f,g] \end{split} \tag{6.8.21}$$

for f ∈ dom Tmax ⊂ D and g ∈ D. This yields (6.8.20). It also follows from (6.8.21) that

$$(A_1 f, g)_{L^2_r(a,b)} = \mathfrak{t}[f,g]$$

for $f\in\mathrm{dom}\,A_1 = \ker\Gamma_1$ and g ∈ D, and the first representation theorem implies that $A_1 = S_1$. Let ε > 0 and $C_\varepsilon > 0$ be as in Lemma 6.8.3. It follows from the estimate (6.8.17) that for $\rho < m(S_1)$ there exists $C_{\rho,\varepsilon} > 0$ such that

$$\|\Lambda f\|^2_{\mathbb{C}^2} = |f(a)|^2 + |f(b)|^2 \le 2C_\varepsilon \|f\|^2_{L^2_r(a,b)} + 2\varepsilon\,\mathfrak{t}[f] \le C_{\rho,\varepsilon}\,\|f\|^2_{\mathfrak{t}_{S_1}-\rho}$$

for all f ∈ D. Therefore, $\Lambda\in\mathbf{B}(\mathcal{H}_{\mathfrak{t}_{S_1}-\rho},\mathbb{C}^2)$; recall that the Hilbert space $\mathcal{H}_{\mathfrak{t}_{S_1}-\rho}$ was defined above Lemma 5.1.3. Now Lemma 5.6.5 implies that $\{\mathbb{C}^2,\Lambda\}$ is a boundary pair for Tmin and, since $A_1 = S_1$, one also sees that $\{\mathbb{C}^2,\Lambda\}$ and $\{\mathbb{C}^2,\Gamma_0,\Gamma_1\}$ are compatible. $\square$

Recall that by means of the boundary triplet in (6.8.2) all self-adjoint extensions of Tmin are in one-to-one correspondence with the self-adjoint relations Θ in $\mathbb{C}^2$ via

$$\operatorname{dom} A\_{\Theta} = \left\{ f \in \operatorname{dom} T\_{\max} \, : \, \{ \Gamma\_0 f, \Gamma\_1 f \} \in \Theta \right\}. \tag{6.8.22}$$

The next result, which is an immediate consequence of Theorem 5.6.13 and Corollary 5.6.14, makes use of the compatible boundary pair in Lemma 6.8.4 and provides a characterization of all closed semibounded forms associated with the semibounded self-adjoint extensions AΘ.

**Theorem 6.8.5.** Let $\{\mathbb{C}^2,\Gamma_0,\Gamma_1\}$ be the boundary triplet in (6.8.2), let Θ be a self-adjoint relation in $\mathbb{C}^2$, and let $A_\Theta$ be the corresponding self-adjoint restriction of Tmax in (6.8.22). Then $A_\Theta$ is semibounded from below and the corresponding densely defined closed semibounded form $\mathfrak{t}_\Theta$ in $L^2_r(a,b)$ such that

$$(A\_{\Theta}f,g)\_{L^{2}\_{r}(a,b)} = \mathfrak{t}\_{\Theta}[f,g], \quad f \in \text{dom}\,A\_{\Theta}, \ g \in \text{dom}\,\mathfrak{t}\_{\Theta},$$

is given as follows:

(i) If Θ is a symmetric 2 × 2-matrix, then

$$\mathfrak{t}_\Theta[f,g] = \mathfrak{t}[f,g] + \left(\Theta\begin{pmatrix}f(a)\\ f(b)\end{pmatrix}, \begin{pmatrix}g(a)\\ g(b)\end{pmatrix}\right)_{\mathbb{C}^2}, \qquad \mathrm{dom}\,\mathfrak{t}_\Theta = \mathfrak{D}.$$

(ii) If $\Theta = \Theta_{\rm op}\oplus\Theta_{\rm mul}$ with respect to the decomposition $\mathbb{C}^2 = \mathrm{dom}\,\Theta_{\rm op}\oplus\mathrm{mul}\,\Theta$ and $\dim\mathrm{dom}\,\Theta_{\rm op} = 1$, then

$$\begin{aligned} \mathfrak{t}_{\Theta}[f,g] &= \mathfrak{t}[f,g] + \Theta_{\rm op}\left(f(a)\overline{g(a)} + f(b)\overline{g(b)}\right), \\ \mathrm{dom}\,\mathfrak{t}_{\Theta} &= \left\{ h \in \mathfrak{D} : \begin{pmatrix} h(a) \\ h(b) \end{pmatrix} \in \mathrm{dom}\,\Theta_{\rm op} \right\}. \end{aligned}$$

(iii) If $\Theta = \{0\}\times\mathbb{C}^2$, then $A_\Theta = A_0$ coincides with the Friedrichs extension $S_F$ and

$$\mathfrak{t}_\Theta[f,g] = \mathfrak{t}[f,g], \qquad \mathrm{dom}\,\mathfrak{t}_\Theta = \left\{ h \in \mathfrak{D} : h(a) = h(b) = 0 \right\}.$$

Theorem 6.8.5 has an immediate corollary for the form $\mathfrak{t}_\Theta$, which is the analog of Lemma 6.8.3 for the form t.

**Corollary 6.8.6.** Let [a, b] be a compact interval and let the conditions (6.8.1) and (6.8.3) be satisfied. Let Θ be a self-adjoint relation in $\mathbb{C}^2$ and let the form $\mathfrak{t}_\Theta$ be as in Theorem 6.8.5. Then for ε > 0 there exists $C_\varepsilon > 0$ such that

$$|f(x)|^2 \le C\_\varepsilon \|f\|\_{L^2\_r(a,b)}^2 + \varepsilon \mathfrak{t}\_\Theta[f], \qquad x \in [a,b],$$

holds for all $f\in\mathrm{dom}\,\mathfrak{t}_\Theta$.

Proof. Let $\mathfrak{t}_\Theta$ be given as in Theorem 6.8.5 (i); the case in Theorem 6.8.5 (ii) is treated in the same way and for $\mathfrak{t}_\Theta$ in Theorem 6.8.5 (iii) the result is clear from Lemma 6.8.3. Let μ(Θ) be the smallest eigenvalue of the symmetric 2 × 2 matrix Θ and define $\mu\in\mathbb{R}$ by

$$\mu = \begin{cases} 0, & \mu(\Theta) \ge 0, \\ |\mu(\Theta)|, & \mu(\Theta) < 0. \end{cases}$$

Then $(\Theta\Lambda f,\Lambda f)_{\mathbb{C}^2} \ge -\mu\left(|f(a)|^2 + |f(b)|^2\right)$ for all $f\in\mathfrak{D} = \mathrm{dom}\,\mathfrak{t}_\Theta$ and, using Lemma 6.8.3, one concludes that for $0 < \varepsilon' < 1$ there exists $C_{\varepsilon'} > 0$ such that

$$(\Theta \Lambda f, \Lambda f)\_{\mathbb{C}^2} \ge -C\_{\varepsilon'} \|f\|\_{L^2\_r(a,b)}^2 - \varepsilon' \mathfrak{t}[f], \qquad f \in \mathfrak{D}.$$

The inequality

$$\mathfrak{t}\_{\Theta}[f] = \mathfrak{t}[f] + (\Theta \Lambda f, \Lambda f)\_{\mathbb{C}^2} \ge (1 - \varepsilon')\mathfrak{t}[f] - C\_{\varepsilon'} \|f\|\_{L^2\_r(a,b)}^2$$

shows that

$$\mathfrak{t}[f] \le \frac{C\_{\varepsilon'}}{1 - \varepsilon'} \|f\|\_{L^2\_r(a,b)}^2 + \frac{1}{1 - \varepsilon'} \mathfrak{t}\_\Theta[f], \qquad f \in \mathfrak{D},$$

and another application of Lemma 6.8.3 completes the proof. $\square$

Finally, the Kreĭn-type extensions $S_{\mathrm{K},x}$ from Definition 5.4.2 are provided for $x < m(S_{\mathrm{F}})$. Recall from Proposition 6.3.1 that

$$M(x) = \frac{1}{u\_2(b,x)} \begin{pmatrix} -u\_1(b,x) & 1\\ 1 & -(pu\_2')(b,x) \end{pmatrix}.$$

Hence, it follows from Theorem 5.5.1 that

$$\operatorname{dom} S_{\mathrm{K},x} = \left\{ f \in \operatorname{dom} T_{\max} : M(x) \begin{pmatrix} f(a) \\ f(b) \end{pmatrix} = \begin{pmatrix} (pf')(a) \\ -(pf')(b) \end{pmatrix} \right\}.$$

Note that the semibounded self-adjoint extensions of $T_{\min}$ in $L^2_r(a, b)$ with lower bound $x < m(S_{\mathrm{F}})$ are precisely those self-adjoint extensions $A_{\Theta}$ that satisfy the inequalities $S_{\mathrm{K},x} \le A_{\Theta} \le S_{\mathrm{F}}$. These extensions and the corresponding closed semibounded forms can now be described explicitly using the results in Section 5.6. In particular, if $m(S_{\mathrm{F}}) > 0$, then the Kreĭn–von Neumann extension $S_{\mathrm{K},0}$ is defined on

$$\operatorname{dom} S\_{\mathcal{K},0} = \left\{ f \in \operatorname{dom} T\_{\max} : M(0) \begin{pmatrix} f(a) \\ f(b) \end{pmatrix} = \begin{pmatrix} (pf')(a) \\ -(pf')(b) \end{pmatrix} \right\}$$

and Corollary 5.6.18 provides a one-to-one correspondence between all closed nonnegative forms corresponding to nonnegative self-adjoint extensions $A_{\Theta}$ of $T_{\min}$ and all closed nonnegative forms corresponding to nonnegative self-adjoint relations $\Theta$ in $\mathbb{C}^2$.

## **6.9 Closed semibounded forms for Sturm–Liouville equations**

Let L be the Sturm–Liouville differential expression given by (6.1.1) on the interval (a, b). It will be assumed that the coefficient functions satisfy the conditions in (6.1.2) and, in addition, that

$$p(x) > 0 \quad \text{for almost all } x \in (a, b). \tag{6.9.1}$$

In Section 6.8 it was assumed that $L$ is regular at the endpoints, in which case the minimal operator $T_{\min}$ is semibounded from below, and the form in (6.8.4) and the mapping in (6.8.6) give a boundary pair compatible with the boundary triplet in (6.8.2). The interest is now in the construction of a corresponding form when $L$ is not necessarily regular at the endpoints, in which case (6.8.4) is no longer adequate. The key to defining an appropriate form in the general case is the condition (6.9.1) together with the assumption that there are nonoscillatory solutions of the Sturm–Liouville equation $(L-\lambda_0)y = 0$ for some $\lambda_0 \in \mathbb{R}$. It will be shown that these assumptions imply that $T_{\min}$ is semibounded from below. The main result in this section is Theorem 6.9.6. In Section 6.10 the properties of the nonoscillatory solutions will be further investigated.

The differential equation $(L - \lambda_0)y = 0$ with $\lambda_0 \in \mathbb{R}$ is said to be nonoscillatory at an endpoint $a$ or $b$ if it has a real solution $u$ whose zeros do not accumulate at $a$ or $b$, respectively. Otherwise, the equation $(L - \lambda_0)y = 0$ is called oscillatory. In the nonoscillatory case the zeros of any nontrivial real solution do not accumulate at that endpoint; cf. Lemma 6.1.8. Furthermore, Lemma 6.1.8 also implies that in this case the equation $(L-\lambda_0')y = 0$ with $\lambda_0' < \lambda_0$ is nonoscillatory. If the equation $(L - \lambda_0)y = 0$ is nonoscillatory at both endpoints, then there exist real solutions of this equation which do not vanish in neighborhoods of $a$ and $b$, respectively. As a preparation for the general case there is first a closer look at the situation where the equation $(L - \lambda_0)y = 0$ has a real solution which does not vanish on a subinterval.
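As a concrete illustration of this dichotomy (an illustration, not part of the text), take $p = r = 1$ and $q = 0$ on $(a, b) = (0, \infty)$, so that $L = -d^2/dx^2$. Then $y(x) = e^x$ solves $(L + 1)y = 0$ and has no zeros at all, so the equation is nonoscillatory for $\lambda_0 = -1$, while $y(x) = \sin x$ solves $(L - 1)y = 0$ and its zeros accumulate at $\infty$. A minimal sketch that counts sign changes on a grid:

```python
import math

def sign_changes(f, xs):
    """Count sign changes of f along the grid xs (a proxy for zeros)."""
    vals = [f(x) for x in xs]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

xs = [0.01 * k for k in range(1, 10001)]   # grid on (0, 100]

# lambda0 = -1: u(x) = exp(x) solves -u'' = -u and never vanishes
nonosc = sign_changes(math.exp, xs)

# lambda0 = +1: u(x) = sin(x) solves -u'' = u, zeros at every multiple of pi
osc = sign_changes(math.sin, xs)

print(nonosc, osc)   # 0 zeros vs. 31 zeros on (0, 100]
```

The zeros of $\sin$ in $(0, 100]$ sit at $k\pi$ for $k = 1, \dots, 31$, so on any sufficiently fine grid the count grows linearly with the length of the interval, while the exponential solution produces none.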

Assume that $\phi$ is a real solution of $(L - \lambda_0)y = 0$ with $\lambda_0 \in \mathbb{R}$ which does not vanish on the subinterval $(\alpha, \beta) \subset (a, b)$. Use $\phi$ to introduce the first-order differential expression $N_{\phi}$,

$$N\_{\phi}f = \sqrt{p}\phi \left(\frac{f}{\phi}\right)'\tag{6.9.2}$$

for all functions $f \in AC(\alpha, \beta)$. The differential expression $N_{\phi}$ in (6.9.2) generates an operator $R_{\phi}$ from $L^2_r(\alpha, \beta)$ to $L^2(\alpha, \beta)$ defined on the linear subspace

$$\mathfrak{D}_{\phi} = \left\{ f \in L_r^2(\alpha, \beta) \, : \, f \in AC(\alpha, \beta), \, N_{\phi}f \in L^2(\alpha, \beta) \right\}$$

by

$$R\_{\phi}f = N\_{\phi}f, \quad \text{dom}\, R\_{\phi} = \mathfrak{D}\_{\phi}.\tag{6.9.3}$$

In the special case $q = 0$, $\lambda_0 = 0$, and $\phi = 1$, the corresponding differential expression in (6.9.2) reduces to $N_{\phi}f = \sqrt{p}f'$, which played an important role in the proof of Lemma 6.8.2. The next lemma is an analog of Lemma 6.8.2. The interval $(\alpha, \beta)$ below is a possibly unbounded subinterval of $(a, b)$ for which $\alpha = a$ or $\beta = b$ is allowed.

**Lemma 6.9.1.** Let $\phi$ be a real solution of $(L - \lambda_0)y = 0$ with $\lambda_0 \in \mathbb{R}$, which does not vanish on a subinterval $(\alpha, \beta) \subset (a, b)$. Then the operator $R_{\phi}$ from $L^2_r(\alpha, \beta)$ to $L^2(\alpha, \beta)$ defined in (6.9.3) is closed. The associated form $\mathfrak{r}_{\phi}$ in $L^2_r(\alpha, \beta)$ defined by

$$\mathfrak{r}_{\phi}[f,g] = (R_{\phi}f, R_{\phi}g)_{L^2(\alpha,\beta)}, \qquad f,g \in \operatorname{dom}\mathfrak{r}_{\phi} = \operatorname{dom} R_{\phi} = \mathfrak{D}_{\phi},$$

is nonnegative and closed.

Proof. The proof will be given in three steps. Observe that the functions $r\phi^2$ and $p\phi^2$ satisfy the integrability conditions

$$r\phi^2 \in L\_{\text{loc}}^1\left(\alpha, \beta\right), \quad \frac{1}{p\phi^2} \in L\_{\text{loc}}^1\left(\alpha, \beta\right),\tag{6.9.4}$$

while $r\phi^2$ and $p\phi^2$ are positive almost everywhere on $(\alpha, \beta)$.

Step 1. First take $q = 0$, $\lambda_0 = 0$, and $\phi = 1$, in which case $N_{\phi}f = \sqrt{p}f'$. Then the associated operator $R_{\phi}$ from $L^2_r(\alpha, \beta)$ to $L^2(\alpha, \beta)$ is closed. To see this, let $f_n \in \mathfrak{D}_{\phi}$ be such that $f_n \to f$ in $L^2_r(\alpha, \beta)$ and $R_{\phi}f_n \to g$ in $L^2(\alpha, \beta)$. Then clearly for every compact subinterval $[\alpha', \beta'] \subset (\alpha, \beta)$ one has that $f_n \to f$ in $L^2_r(\alpha', \beta')$ and $R_{\phi}f_n \to g$ in $L^2(\alpha', \beta')$. By Lemma 6.8.2, this implies that on $[\alpha', \beta']$ one has $f \in AC[\alpha', \beta']$ and $g = \sqrt{p}f'$. Since $[\alpha', \beta']$ is arbitrary, one concludes that $f \in AC(\alpha, \beta)$ and $g = \sqrt{p}f' \in L^2(\alpha, \beta)$. In other words, $f \in \mathfrak{D}_{\phi}$ and $g = R_{\phi}f$.

Step 2. Introduce the Hilbert space $L^2_{r\phi^2}(\alpha, \beta)$ and the linear space $\mathfrak{D}'$ by

$$\mathfrak{D}' = \left\{ f \in L^2\_{r\phi^2}(\alpha, \beta) : f \in AC(\alpha, \beta), \ \sqrt{p}\phi f' \in L^2(\alpha, \beta) \right\}.$$

Note that

$$f \in L\_r^2(\alpha, \beta) \quad \Leftrightarrow \quad \frac{f}{\phi} \in L\_{r\phi^2}^2(\alpha, \beta),$$

and

$$f \in \mathfrak{D}\_{\phi} \quad \Leftrightarrow \quad \frac{f}{\phi} \in \mathfrak{D}'.$$

Step 3. The operator $R_{\phi}$ from $L^2_r(\alpha, \beta)$ to $L^2(\alpha, \beta)$ is closed. Indeed, let $f_n \in \mathfrak{D}_{\phi}$ be such that $f_n \to f$ in $L^2_r(\alpha, \beta)$ and $R_{\phi}f_n \to g$ in $L^2(\alpha, \beta)$. Then it is clear that

$$\frac{f\_n}{\phi} \in \mathfrak{D}', \quad \frac{f\_n}{\phi} \to \frac{f}{\phi} \quad \text{in} \quad L^2\_{r\phi^2}(\alpha, \beta),$$

while also

$$
\sqrt{p}\phi\left(\frac{f\_n}{\phi}\right)' \to g \quad \text{in} \quad L^2(\alpha, \beta).
$$

Then Step 1, applied with p replaced by pφ<sup>2</sup> and r replaced by rφ<sup>2</sup> (and taking into account the integrability conditions (6.9.4)) shows that

$$\frac{f}{\phi} \in \mathfrak{D}' \quad \text{and} \quad \sqrt{p}\phi \left(\frac{f}{\phi}\right)' = g.$$

Thus, by Step 2 one obtains $f \in \mathfrak{D}_{\phi}$ and $g = R_{\phi}f$. Therefore, the operator $R_{\phi}$ is closed, as claimed. As a consequence, the associated nonnegative form $\mathfrak{r}_{\phi}$ is closed; cf. Lemma 5.1.21. $\square$

The differential expression $N_{\phi}$ appears naturally when one considers the following form of the first Green identity for the differential expression $L$.

**Lemma 6.9.2.** Let $\phi$ be a real solution of $(L-\lambda_0)y = 0$ with $\lambda_0 \in \mathbb{R}$, which does not vanish on a subinterval $(\alpha, \beta) \subset (a, b)$. Assume that $f, pf', g \in AC(\alpha, \beta)$. Then for any compact subinterval $[\alpha', \beta'] \subset (\alpha, \beta)$ one has

$$\begin{split} \int\_{\alpha^{\prime}}^{\beta^{\prime}} (Lf)(x) \overline{g(x)} r(x) \, dx &= W\_{x}(f, \phi) \left( \frac{\overline{g}}{\phi} \right)(x) \Big|\_{\alpha^{\prime}}^{\beta^{\prime}} \\ &+ \int\_{\alpha^{\prime}}^{\beta^{\prime}} (N\_{\phi} f)(x) \overline{(N\_{\phi} g)(x)} \, dx + \lambda\_{0} \int\_{\alpha^{\prime}}^{\beta^{\prime}} f(x) \overline{g(x)} r(x) \, dx. \end{split} \tag{6.9.5}$$

Proof. Since $f, pf' \in AC(\alpha, \beta)$, it follows from the definition of the Wronskian that

$$\left( (L - \lambda\_0) f \right)(x) r(x) \phi(x) = \frac{d}{dx} W\_x(f, \phi);$$

cf. (6.1.9). Multiply this identity by $\overline{g(x)}/\phi(x)$; then integration by parts gives for any compact subinterval $[\alpha', \beta'] \subset (\alpha, \beta)$ that

$$\begin{aligned} &\int\_{\alpha'}^{\beta'} ((L-\lambda\_0)f)(x) \overline{g(x)} r(x) \, dx \\ & \qquad = \int\_{\alpha'}^{\beta'} (W\_x(f,\phi))'(x) \overline{\frac{g(x)}{\phi(x)}} \, dx \\ & \qquad = W\_x(f,\phi) \left(\frac{\overline{g}}{\phi}\right)(x) \Big|\_{\alpha'}^{\beta'} - \int\_{\alpha'}^{\beta'} W\_x(f,\phi) \frac{d}{dx} \left(\overline{\frac{g(x)}{\phi(x)}}\right) dx .\end{aligned}$$

Now observe that the Wronskian $W_x(f,\phi)$ can be written in terms of the differential expression $N_{\phi}f$ in (6.9.2) as

$$W\_x(f, \phi) = -p\phi^2 \left(\frac{f}{\phi}\right)' = -\sqrt{p}\phi(N\_\phi f),$$

and so

$$-\int_{\alpha'}^{\beta'} W_x(f, \phi) \frac{d}{dx} \left( \overline{\frac{g(x)}{\phi(x)}} \right) dx = \int_{\alpha'}^{\beta'} (N_\phi f)(x) \overline{(N_\phi g)(x)} \, dx.$$

Hence, the result in (6.9.5) follows. $\square$

In the first Green formula (6.9.5) there is an interplay between the out-integrated parts and the integrals involving the differential expression $N_{\phi}$. In fact, if one assumes, in addition, that $f, Lf \in L^2_r(\alpha, \beta)$ and takes $g = f$ in (6.9.5), then for fixed $c \in (\alpha, \beta)$ the limits

$$\lim\_{x \to \alpha} W\_x(f, \phi) \left( \frac{\overline{f}}{\phi} \right)(x) \quad \text{or} \quad \lim\_{x \to \beta} W\_x(f, \phi) \left( \frac{\overline{f}}{\phi} \right)(x), \tag{6.9.6}$$

exist in $\mathbb{C}$ if and only if $N_{\phi}f \in L^2(\alpha, c)$ or $N_{\phi}f \in L^2(c, \beta)$, respectively, since the corresponding limit on the left-hand side and the limit of the third term on the right-hand side in (6.9.5) exist.

Note that the integral terms on the right-hand side of (6.9.5) make sense for $f, g \in AC(\alpha, \beta)$. In the next lemma these terms are rewritten using the form in (6.8.4).

**Lemma 6.9.3.** Let $\phi$ be a real solution of $(L - \lambda_0)y = 0$ with $\lambda_0 \in \mathbb{R}$, which does not vanish on a subinterval $(\alpha, \beta) \subset (a, b)$. Assume that $f, g \in AC(\alpha, \beta)$. Then for any compact subinterval $[\alpha', \beta'] \subset (\alpha, \beta)$ one has

$$\begin{split} \int\_{\alpha'}^{\beta'} (N\_{\phi}f)(x) \overline{(N\_{\phi}g)(x)} \, dx + \lambda\_0 \int\_{\alpha'}^{\beta'} f(x) \overline{g(x)} r(x) \, dx \\ = \int\_{\alpha'}^{\beta'} \{ (\sqrt{p}f')(x) \overline{(\sqrt{p}g')(x)} + q(x)f(x) \overline{g(x)} \} \, dx \\ - \frac{(p\phi')(\beta')}{\phi(\beta')} f(\beta') \overline{g(\beta')} + \frac{(p\phi')(\alpha')}{\phi(\alpha')} f(\alpha') \overline{g(\alpha')}. \end{split} \tag{6.9.7}$$

Proof. Let $f, g \in AC(\alpha, \beta)$. Then

$$N\_{\phi}f\overline{N\_{\phi}g} = pf'\overline{g}' - (f\overline{g})'\frac{p\phi'}{\phi} + f\overline{g}\,p\left(\frac{\phi'}{\phi}\right)^2. \tag{6.9.8}$$

Observe that

$$\begin{aligned} -\int_{\alpha'}^{\beta'} (f\overline{g})'(x) \frac{(p\phi')(x)}{\phi(x)} \, dx = \int_{\alpha'}^{\beta'} (f\overline{g})(x) \left(\frac{p\phi'}{\phi}\right)'(x) \, dx& \\ - \frac{(p\phi')(\beta')}{\phi(\beta')} f(\beta') \overline{g(\beta')} + \frac{(p\phi')(\alpha')}{\phi(\alpha')} f(\alpha') \overline{g(\alpha')}&, \end{aligned}$$

and that

$$\left(\frac{p\phi'}{\phi}\right)' + p\left(\frac{\phi'}{\phi}\right)^2 = \frac{(p\phi')'}{\phi} = q - \lambda\_0 r.$$

Now integration of (6.9.8) leads to the desired result. $\square$
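The identity (6.9.8) is purely pointwise: $\phi$ only needs to be real, differentiable, and nonvanishing; it need not solve the differential equation. A minimal check at sample points (illustrative data, not from the text): $p(x) = 1 + x^2$ and $\phi(x) = e^x$, so that $\phi'/\phi = 1$.

```python
import math

# Pointwise check of the algebraic identity (6.9.8) for real f, g.
# Illustrative data: p(x) = 1 + x^2, phi(x) = e^x (so phi'/phi = 1).
p = lambda x: 1 + x * x
f, df = math.sin, math.cos
g, dg = math.cos, (lambda x: -math.sin(x))

def N(u, du, x):
    # N_phi u = sqrt(p) phi (u/phi)' = sqrt(p)(u' - u phi'/phi)
    return math.sqrt(p(x)) * (du(x) - u(x))

maxerr = 0.0
for x in [0.1, 0.7, 1.3, 2.9]:
    lhs = N(f, df, x) * N(g, dg, x)
    fg_prime = df(x) * g(x) + f(x) * dg(x)          # (f g)'
    # p f' g' - (f g)' (p phi'/phi) + f g p (phi'/phi)^2 with phi'/phi = 1
    rhs = p(x) * df(x) * dg(x) - fg_prime * p(x) + f(x) * g(x) * p(x)
    maxerr = max(maxerr, abs(lhs - rhs))
print(maxerr)   # numerically zero
```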

After this excursion to the case of a nonvanishing solution of the equation $(L - \lambda_0)y = 0$ on an arbitrary open subinterval $(\alpha, \beta) \subset (a, b)$, one returns to the general situation. Let $L$ be the Sturm–Liouville expression (6.1.1) on the interval $(a, b)$ and let the coefficient functions satisfy the conditions (6.1.2) and (6.9.1). Assume that there exist $\lambda_0^a \in \mathbb{R}$ and $\lambda_0^b \in \mathbb{R}$ for which the equations $(L-\lambda_0^a)y = 0$ and $(L - \lambda_0^b)y = 0$ are nonoscillatory at $a$ and $b$, respectively. Then Lemma 6.1.8 implies that for $\lambda_0 \le \min\{\lambda_0^a, \lambda_0^b\}$ the equation $(L - \lambda_0)y = 0$ is nonoscillatory at $a$ and $b$. Thus, this equation has real solutions $\phi_a$ and $\phi_b$ which do not vanish on $(a, a_0)$ and on $(b_0, b)$, respectively. Denote the corresponding first-order differential expressions by $N_{\phi_a}$ and $N_{\phi_b}$; cf. (6.9.2). To define a form associated with the differential expression $L$ by means of $N_{\phi_a}$ and $N_{\phi_b}$, let $[c, d] \subset (a, b)$ be a compact interval such that

$$a < c < a\_0 < b\_0 < d < b. \tag{6.9.9}$$

Define the linear subspace $\mathfrak{D} \subset L^2_r(a, b)$ by

$$\mathfrak{D} = \left\{ f \in L^2_r(a,b) : f \in AC(a,b),\ \sqrt{p}f' \in L^2(c,d),\ N_{\phi_a}f \in L^2(a,c),\ N_{\phi_b}f \in L^2(d,b) \right\}, \tag{6.9.10}$$

and define the form t by

$$\begin{split} \mathfrak{t}[f,g] &= \int_{a}^{c} (N_{\phi_{a}}f)(x) \overline{(N_{\phi_{a}}g)(x)} \, dx + \int_{d}^{b} (N_{\phi_{b}}f)(x) \overline{(N_{\phi_{b}}g)(x)} \, dx \\ &\quad + \lambda_{0} \int_{a}^{c} f(x) \overline{g(x)} r(x) \, dx + \lambda_{0} \int_{d}^{b} f(x) \overline{g(x)} r(x) \, dx \\ &\quad + \int_{c}^{d} \left( (\sqrt{p}f')(x) \overline{(\sqrt{p}g')(x)} + q(x)f(x) \overline{g(x)} \right) dx \\ &\quad + \frac{(p\phi_{a}')(c)}{\phi_{a}(c)} f(c) \overline{g(c)} - \frac{(p\phi_{b}')(d)}{\phi_{b}(d)} f(d) \overline{g(d)}, \end{split} \tag{6.9.11}$$

where $f, g \in \mathfrak{D}$. Observe that here

$$\frac{(p\phi\_a')(c)}{\phi\_a(c)} \in \mathbb{R} \quad \text{and} \quad \frac{(p\phi\_b')(d)}{\phi\_b(d)} \in \mathbb{R}.\tag{6.9.12}$$

The basic properties of $\mathfrak{t}$ and its domain $\mathfrak{D}$ in (6.9.10) and (6.9.11) will be shown in the following lemma. It is clear that $\mathfrak{t}$ and $\mathfrak{D}$ depend on the choice of the nonoscillatory solutions $\phi_a$ and $\phi_b$. However, they do not depend on the particular choice of the points $c$ and $d$ in (6.9.9).

**Lemma 6.9.4.** Assume that $\phi_a$ and $\phi_b$ are real nonoscillatory solutions of the equation $(L-\lambda_0)y = 0$ which do not vanish on $(a, a_0)$ and on $(b_0, b)$, respectively. Then the form $\mathfrak{t}$ and its domain $\mathfrak{D}$ in (6.9.10) and (6.9.11) do not depend on the particular choice of the points $c < d$ in (6.9.9). Moreover, the form $\mathfrak{t}$ is closed and bounded from below in $L^2_r(a, b)$.

Proof. Step 1. First one shows that $\mathfrak{t}$ and $\mathfrak{D}$ do not depend on the particular choice of the points $c < d$ in $(a, b)$. Here only the case where the point $d$ is replaced by some point $d'$ with $b_0 < d' < b$ is considered. For the sake of definiteness assume that $d < d' < b$.

To see that in the definition (6.9.10) of $\mathfrak{D}$ the point $d$ may be replaced by the point $d'$, observe that for $f \in AC(a, b)$ one has on $(d, b)$:

$$N\_{\phi\_b} f = \sqrt{p} \phi\_b \left(\frac{f}{\phi\_b}\right)' = \sqrt{p} f' - \frac{1}{\sqrt{p}} \left(f \frac{p \phi\_b'}{\phi\_b}\right). \tag{6.9.13}$$

Consider the compact interval $K = [d, d']$ and recall that the nonoscillatory solution $\phi_b$ does not vanish on $K$. Then the last term on the right-hand side of (6.9.13) belongs to $L^2(K)$ because $1/\sqrt{p} \in L^2(K)$, while the remaining factor is bounded because $f \in AC(a, b)$, $p\phi_b' \in AC(a, b)$, and $\phi_b \in AC(a, b)$. Hence, $N_{\phi_b}f \in L^2(K)$ if and only if $\sqrt{p}f' \in L^2(K)$. From this observation it follows directly that $\mathfrak{D}$ does not depend on the particular choice of the point $d$.

Next consider the right-hand side of (6.9.11) with the point $d$, subtract from it the right-hand side of (6.9.11) with $d$ replaced by the point $d'$, and observe that

$$\begin{aligned} \int\_{d}^{d'} (N\_{\phi\_b} f)(x) \overline{(N\_{\phi\_b} g)(x)} \, dx + \lambda\_0 \int\_{d}^{d'} f(x) \overline{g(x)} r(x) \, dx \\ = \int\_{d}^{d'} \left( (\sqrt{p} f')(x) \overline{(\sqrt{p} g')(x)} + q(x) f(x) \overline{g(x)} \right) dx \\ + \frac{(p \phi\_b')(d)}{\phi\_b(d)} f(d) \overline{g(d)} - \frac{(p \phi\_b')(d')}{\phi\_b(d')} f(d') \overline{g(d')}, \end{aligned}$$

which follows from (6.9.7) in Lemma 6.9.3 with the nonvanishing solution $\phi_b$ on the interval $[d, d']$. This shows that $\mathfrak{t}$ in (6.9.11) does not depend on the choice of the point $d \in (b_0, b)$.
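Step 1 can also be observed numerically. In the following sketch (illustrative data, not from the text) the interval is $(a, b) = (0, 1)$ with $p = r = 1$, $q = 0$, $\lambda_0 = 0$, $\phi_a = 1$, and $\phi_b(x) = 1 + x$, both solutions of $-y'' = 0$. Then $N_{\phi_a}f = f'$ and the boundary term at $c$ vanishes, and the value of $\mathfrak{t}[f]$ in (6.9.11) does not change when $c$ or $d$ is moved:

```python
import math

def simpson(h, a, b, n=2000):
    """Composite Simpson rule for h on [a, b] with n (even) subintervals."""
    s = h(a) + h(b)
    dx = (b - a) / n
    for k in range(1, n):
        s += (4 if k % 2 else 2) * h(a + k * dx)
    return s * dx / 3

# Illustrative data: (a, b) = (0, 1), p = r = 1, q = 0, lambda0 = 0,
# phi_a = 1 and phi_b(x) = 1 + x, both solutions of -y'' = 0.
f, df = math.sin, math.cos
Nb = lambda x: df(x) - f(x) / (1 + x)     # N_{phi_b} f = f' - f phi_b'/phi_b

def t_form(c, d):
    """t[f] of (6.9.11) for this data; the boundary term at c vanishes."""
    return (simpson(lambda x: df(x) ** 2, 0, c)       # N_{phi_a} f = f'
            + simpson(lambda x: df(x) ** 2, c, d)
            + simpson(lambda x: Nb(x) ** 2, d, 1)
            - f(d) ** 2 / (1 + d))    # -(p phi_b')(d)/phi_b(d) |f(d)|^2

v1, v2, v3 = t_form(0.3, 0.6), t_form(0.3, 0.8), t_form(0.2, 0.6)
print(abs(v1 - v2), abs(v1 - v3))    # both numerically zero
```

Moving $d$ trades a piece of the $N_{\phi_b}$-integral for a piece of the $\sqrt{p}f'$-integral plus the boundary term, exactly as in the identity displayed above.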

Step 2. Next, as a preparation for the rest of the proof, one defines forms on each of the disjoint intervals $(a, c)$, $(c, d)$, and $(d, b)$. The properties of these forms are given in Theorem 6.8.5 and Lemma 6.9.1. In the next steps it will be shown how the information about each of the separate intervals can be pieced together for the form $\mathfrak{t}$. For the interval $(a, c)$ define the form $\mathfrak{t}_{(a,c)}$ by

$$\mathfrak{t}\_{(a,c)}[f,g] = \int\_{a}^{c} (N\_{\phi\_a}f)(x) \overline{(N\_{\phi\_a}g)(x)} \, dx + \lambda\_0 \int\_{a}^{c} f(x) \overline{g(x)} r(x) \, dx,$$

on the domain

$$\mathfrak{D}\_{(a,c)} = \left\{ f \in L^2\_r(a,c) : f \in AC(a,c), \ N\_{\phi\_a} f \in L^2(a,c) \right\},$$

and, likewise, for the interval $(d, b)$ define the form $\mathfrak{t}_{(d,b)}$ by

$$\mathfrak{t}_{(d,b)}[f,g] = \int_{d}^{b} (N_{\phi_b}f)(x)\overline{(N_{\phi_b}g)(x)}\,dx + \lambda_0 \int_{d}^{b} f(x)\overline{g(x)}r(x)\,dx,$$

on the domain

$$\mathfrak{D}\_{(d,b)} = \left\{ f \in L^2\_r(d,b) : f \in AC(d,b), \ N\_{\phi\_b} f \in L^2(d,b) \right\}.$$

The forms $\mathfrak{t}_{(a,c)}$ and $\mathfrak{t}_{(d,b)}$ are clearly bounded from below:

$$\mathfrak{t}\_{(a,c)}[f] \ge \lambda\_0 \int\_a^c |f(x)|^2 \, r(x) \, dx, \quad f \in \mathfrak{D}\_{(a,c)},$$

and

$$\mathfrak{t}_{(d,b)}[f] \ge \lambda_0 \int_d^b |f(x)|^2 \, r(x) \, dx, \quad f \in \mathfrak{D}_{(d,b)}.$$

It is a consequence of Lemma 6.9.1 that the forms $\mathfrak{t}_{(a,c)}$ and $\mathfrak{t}_{(d,b)}$ are closed. For the interval $(c, d)$ define the form $\mathfrak{t}_{(c,d)}$ by

$$\begin{aligned} \mathfrak{t}_{(c,d)}[f,g] &= \int_c^d \left( (\sqrt{p}f')(x) \overline{(\sqrt{p}g')(x)} + q(x)f(x)\overline{g(x)} \right) dx \\ &\quad + \frac{(p\phi_a')(c)}{\phi_a(c)} f(c) \overline{g(c)} - \frac{(p\phi_b')(d)}{\phi_b(d)} f(d) \overline{g(d)}, \end{aligned}$$

on the domain

$$\mathfrak{D}\_{(c,d)} = \left\{ f \in L^2\_r(c,d) : f \in AC(c,d), \ \sqrt{p}f' \in L^2(c,d) \right\}.$$

It is a consequence of (6.9.12) and Theorem 6.8.5 with

$$
\Theta = \begin{pmatrix}
\frac{(p\phi\_a')(c)}{\phi\_a(c)} & 0 \\
0 & -\frac{(p\phi\_b')(d)}{\phi\_b(d)}
\end{pmatrix},
$$

that the form $\mathfrak{t}_{(c,d)}$ is closed and bounded from below in $L^2_r(c, d)$:

$$\mathfrak{t}\_{(c,d)}[f] \ge C\_{(c,d)} \int\_c^d |f(x)|^2 \, r(x) \, dx, \quad f \in \mathfrak{D}\_{(c,d)}.$$

Step 3. It will be shown that the form $\mathfrak{t}$ is bounded from below in $L^2_r(a, b)$. Let $f \in \mathfrak{D}$. Then the restrictions of $f$ to the intervals $(a, c)$, $(c, d)$, and $(d, b)$ belong to $\mathfrak{D}_{(a,c)}$, $\mathfrak{D}_{(c,d)}$, and $\mathfrak{D}_{(d,b)}$, respectively, and

$$\mathfrak{t}[f] = \mathfrak{t}\_{(a,c)}[f] + \mathfrak{t}\_{(c,d)}[f] + \mathfrak{t}\_{(d,b)}[f].$$

This decomposition shows that

$$\mathfrak{t}[f] \ge \lambda_0 \left( \int_a^c + \int_d^b \right) |f(x)|^2 \, r(x) \, dx + C_{(c,d)} \int_c^d |f(x)|^2 \, r(x) \, dx.$$

Hence, it follows that

$$\mathfrak{t}[f] \ge M \int_{a}^{b} |f(x)|^{2} \, r(x) \, dx, \quad M = \min\left\{\lambda_{0}, C_{(c,d)}\right\},$$

for all $f \in \mathfrak{D}$. Thus, the form $\mathfrak{t}$ is bounded from below in $L^2_r(a, b)$.

Step 4. It will be shown that the form $\mathfrak{t}$ is closed in $L^2_r(a, b)$. For this, let $f_n \in \mathfrak{D}$ be a sequence such that $f_n \to f$ in $L^2_r(a, b)$ and $\mathfrak{t}[f_n - f_m] \to 0$. One needs to establish that $f \in \mathfrak{D} = \operatorname{dom}\mathfrak{t}$ and $\mathfrak{t}[f_n - f] \to 0$. It is clear from the assumption that the restrictions to the intervals $(a, c)$, $(c, d)$, and $(d, b)$ satisfy

$$\begin{aligned} f\_n &\to f \quad \text{in} \quad L^2\_r(a,c) \quad \text{and} \quad \mathfrak{t}\_{(a,c)}[f\_n - f\_m] \to 0, \\ f\_n &\to f \quad \text{in} \quad L^2\_r(c,d) \quad \text{and} \quad \mathfrak{t}\_{(c,d)}[f\_n - f\_m] \to 0, \\ f\_n &\to f \quad \text{in} \quad L^2\_r(d,b) \quad \text{and} \quad \mathfrak{t}\_{(d,b)}[f\_n - f\_m] \to 0, \end{aligned}$$

respectively. It follows from Lemma 6.9.1 for the intervals (a, c) and (d, b), and from Theorem 6.8.5 for the interval (c, d), that

$$\begin{aligned} f \in AC(a, c), \quad N\_{\phi\_a} f \in L^2(a, c), \quad N\_{\phi\_a} f\_n \to N\_{\phi\_a} f, \\ f \in AC(c, d), \quad \sqrt{p} f' \in L^2(c, d), \quad \mathfrak{t}\_{(c, d)}[f\_n - f] \to 0, \\ f \in AC(d, b), \quad N\_{\phi\_b} f \in L^2(d, b), \quad N\_{\phi\_b} f\_n \to N\_{\phi\_b} f, \end{aligned}$$

respectively. It remains to show that f ∈ AC(a, b).

Recall from Step 1 that the definition of $\mathfrak{t}$ is independent of the choice of the interval $(c, d)$. By enlarging $(c, d)$ to $(c', d')$ with $c' < c$ and $d < d'$ one concludes that also $f \in AC(c', d')$. Hence, there is absolute continuity across the points $c$ and $d$. Thus, $f \in AC(a, b)$ and consequently $f \in \operatorname{dom}\mathfrak{t}$, while $\mathfrak{t}[f_n - f] \to 0$. One concludes that the form $\mathfrak{t}$ is closed in $L^2_r(a, b)$. $\square$

The Green formula in the following lemma will give the connection between the differential expression $L$ and the form $\mathfrak{t}$ defined in (6.9.10) and (6.9.11).

**Lemma 6.9.5.** Assume that $\phi_a$ and $\phi_b$ are real nonoscillatory solutions of the equation $(L - \lambda_0)y = 0$ which do not vanish on $(a, a_0)$ and on $(b_0, b)$, respectively. Let $[c, d]$ be as in (6.9.9) and assume that $f, pf', g \in AC(a, b)$. Then for any choice of $a'$ and $b'$ with $a < a' < c$ and $d < b' < b$ one has

$$\begin{split} &\int_{a'}^{b'} (Lf)(x) \overline{g(x)} r(x) \, dx \\ &= W_{b'}(f,\phi_b) \left(\frac{\overline{g}}{\phi_b}\right)(b') - W_{a'}(f,\phi_a) \left(\frac{\overline{g}}{\phi_a}\right)(a') \\ &\quad + \int_{a'}^{c} (N_{\phi_a} f)(x) \overline{(N_{\phi_a} g)(x)} \, dx + \int_{d}^{b'} (N_{\phi_b} f)(x) \overline{(N_{\phi_b} g)(x)} \, dx \\ &\quad + \lambda_0 \int_{a'}^{c} f(x) \overline{g(x)} r(x) \, dx + \lambda_0 \int_{d}^{b'} f(x) \overline{g(x)} r(x) \, dx \\ &\quad + \int_{c}^{d} \left( (\sqrt{p} f')(x) \overline{(\sqrt{p} g')(x)} + q(x) f(x) \overline{g(x)} \right) dx \\ &\quad + \frac{(p\phi_a')(c)}{\phi_a(c)} f(c) \overline{g(c)} - \frac{(p\phi_b')(d)}{\phi_b(d)} f(d) \overline{g(d)}. \end{split} \tag{6.9.14}$$

Proof. Split the integral on the left-hand side of (6.9.14) into three integrals over the subintervals $(a', c)$, $[c, d]$, and $(d, b')$, and evaluate each integral by partial integration. Recall that the integral over the compact interval $[c, d]$ gives the usual formula

$$\begin{aligned} \int\_c^d (Lf)(x)\overline{g(x)}r(x) \,dx &= -(pf')(x)\overline{g(x)}\Big|\_c^d \\ &+ \int\_c^d \left( (\sqrt{p}f')(x)\overline{(\sqrt{p}g')(x)} + q(x)f(x)\overline{g(x)} \right)dx. \end{aligned}$$

As suggested by Lemma 6.9.2, the integral over the interval $(a', c)$ can be written as

$$\begin{aligned} \int\_{a'}^c (Lf)(x) \overline{g(x)} r(x) \, dx &= W\_x(f, \phi\_a) \left( \frac{\overline{g}}{\phi\_a} \right) (x) \Big|\_{a'}^c \\ &+ \int\_{a'}^c (N\_{\phi\_a} f)(x) \overline{(N\_{\phi\_a} g)(x)} \, dx + \lambda\_0 \int\_{a'}^c f(x) \overline{g(x)} r(x) \, dx \end{aligned}$$

and the integral over the interval $(d, b')$ can be written as

$$\begin{aligned} \int_{d}^{b'} (Lf)(x) \overline{g(x)} r(x) \, dx &= W_x(f, \phi_b) \left( \frac{\overline{g}}{\phi_b} \right)(x) \Big|_{d}^{b'} \\ &\quad + \int_{d}^{b'} (N_{\phi_b} f)(x) \overline{(N_{\phi_b} g)(x)} \, dx + \lambda_0 \int_{d}^{b'} f(x) \overline{g(x)} r(x) \, dx. \end{aligned}$$

Combining the out-integrated parts one obtains

$$\begin{split} W\_x(f, \phi\_a) \left(\frac{\overline{g}}{\phi\_a}\right)(x) \Big|\_{a'}^c &- (pf')(x) \overline{g(x)} \Big|\_c^d + W\_x(f, \phi\_b) \left(\frac{\overline{g}}{\phi\_b}\right)(x) \Big|\_d^{b'} \\ &= W\_{b'}(f, \phi\_b) \left(\frac{\overline{g}}{\phi\_b}\right)(b') - W\_{a'}(f, \phi\_a) \left(\frac{\overline{g}}{\phi\_a}\right)(a') \\ &+ \frac{(p\phi\_a')(c)}{\phi\_a(c)} f(c) \overline{g(c)} - \frac{(p\phi\_b')(d)}{\phi\_b(d)} f(d) \overline{g(d)}, \end{split}$$

which yields the identity (6.9.14). $\square$

A combination of Lemma 6.9.4 and Lemma 6.9.5 leads to the following theorem, which describes the interplay between Tmax and the form t.

**Theorem 6.9.6.** Assume the conditions in (6.1.2) and (6.9.1). Let $\phi_a$ and $\phi_b$ be real nonoscillatory solutions of $(L - \lambda_0)y = 0$ for some $\lambda_0 \in \mathbb{R}$ which do not vanish on $(a, a_0)$ and on $(b_0, b)$, respectively, and let $[c, d]$ be as in (6.9.9). Let $\mathfrak{t}$ and $\mathfrak{D} = \operatorname{dom}\mathfrak{t}$ be defined as in (6.9.10) and (6.9.11), so that $\mathfrak{t}$ is a closed semibounded form. Assume that $f \in \operatorname{dom} T_{\max} \cap \mathfrak{D}$ and $g \in \mathfrak{D}$. Then

$$\begin{split} (T\_{\max}f,g)\_{L^{2}\_{r}(a,b)} &= \mathfrak{t}[f,g] + \lim\_{b' \to b} W\_{b'}(f,\phi\_{b}) \left(\frac{\overline{g}}{\phi\_{b}}\right)(b') \\ &- \lim\_{a' \to a} W\_{a'}(f,\phi\_{a}) \left(\frac{\overline{g}}{\phi\_{a}}\right)(a'), \end{split} \tag{6.9.15}$$

where each of the limits exists in $\mathbb{C}$. Furthermore, the form $\mathfrak{t}$ is densely defined and $T_{\min} \subset S_1$, where $S_1$ is the semibounded self-adjoint operator corresponding to $\mathfrak{t}$ and, in fact,

$$(T\_{\min}f,g)\_{L^{2}\_{r}(a,b)} = \mathfrak{t}[f,g], \quad f \in \text{dom}\,T\_{\min} \subset \mathfrak{D}, g \in \mathfrak{D}.\tag{6.9.16}$$

In particular, $T_{\min}$ is bounded from below and the form $\mathfrak{t}_{S_{\mathrm{F}}}$ corresponding to the Friedrichs extension $S_{\mathrm{F}}$ of $T_{\min}$ satisfies

$$\mathfrak{t}_{S_{\mathrm{F}}} \subset \mathfrak{t}. \tag{6.9.17}$$

Proof. The assumptions $f \in \operatorname{dom} T_{\max} \cap \mathfrak{D}$ and $g \in \mathfrak{D}$ show that in (6.9.14) the term on the left-hand side and the integrals on the right-hand side which involve $a'$ and $b'$ have limits when $a' \to a$ or $b' \to b$. Therefore, the limits of

$$W\_{a'}(f, \phi\_a) \left(\frac{\overline{g}}{\phi\_a}\right)(a') \quad \text{and} \quad W\_{b'}(f, \phi\_b) \left(\frac{\overline{g}}{\phi\_b}\right)(b')$$

exist when $a' \to a$ or $b' \to b$. The definitions in (6.9.10) and (6.9.11) then lead to the identity (6.9.15).

To show that $\operatorname{dom} T_{\min} \subset \mathfrak{D}$ and that (6.9.16) holds, take first $f \in \operatorname{dom} T_{\min}$ with compact support. In other words, $f \in \operatorname{dom} T_0$, where $T_0$ denotes the preminimal operator. Then $f, pf' \in AC(a, b)$ and hence $\sqrt{p}f' \in L^2(c, d)$ is clear. It is claimed that $N_{\phi_a}f \in L^2(a, c)$ and $N_{\phi_b}f \in L^2(d, b)$. Only the last inclusion will be shown. Consider the identity

$$N\_{\phi\_b}f = \sqrt{p}\phi\_b \left(\frac{f}{\phi\_b}\right)' = \frac{1}{\sqrt{p}} \left[pf' - p\phi\_b'\frac{f}{\phi\_b}\right],$$

and note that the functions $f, pf' \in AC(a, b)$ have compact support, $\phi_b \in AC(a, b)$ does not vanish on $[d, b)$, and $p\phi_b' \in AC(a, b)$. Hence, $pf' - p\phi_b'(f/\phi_b)$ vanishes in a neighborhood of $b$ and is bounded on $[d, b)$. Since $1/p \in L^1_{\mathrm{loc}}(a, b)$ by the assumption (6.1.2), it follows that $N_{\phi_b}f \in L^2(d, b)$. Therefore,

$$\text{dom}\,T\_0 \subset \mathfrak{D}$$

and, in particular, $\mathfrak{t}$ is densely defined; cf. Theorem 6.2.1. For $f \in \operatorname{dom} T_0$ and $g \in \mathfrak{D}$ the identity

$$(T\_0 f, g)\_{L^2\_r(a, b)} = \mathfrak{t}[f, g] \tag{6.9.18}$$

follows immediately from (6.9.15). By Lemma 6.9.4, the form $\mathfrak{t}$ is closed and bounded from below. Hence, by the first representation theorem, there exists a self-adjoint operator $S_1$ in $L^2_r(a, b)$ which is bounded from below such that

$$(S_1 f, g)_{L^2_r(a, b)} = \mathfrak{t}[f, g]$$

holds for all $f \in \operatorname{dom} S_1 \subset \mathfrak{D}$ and $g \in \mathfrak{D}$. It follows from (6.9.18) and Theorem 5.1.18 that $T_0 \subset S_1$, and hence also $\overline{T_0} = T_{\min} \subset S_1$. In particular, $T_{\min}$ is bounded from below, one has $\operatorname{dom} T_{\min} \subset \mathfrak{D}$, and (6.9.16) holds.

In order to verify (6.9.17), consider the form $\mathfrak{t}_0[f,g] = (T_0 f, g)_{L^2_r(a,b)}$ defined on $\operatorname{dom} T_0$. Then one has $\mathfrak{t}_0 \subset \mathfrak{t}$ by (6.9.16). Since $\mathfrak{t}$ is closed, the closure of $\mathfrak{t}_0$, which coincides with the form $\mathfrak{t}_{S_{\mathrm{F}}}$ corresponding to the Friedrichs extension $S_{\mathrm{F}}$ in Definition 5.3.2, is contained in $\mathfrak{t}$. This leads to (6.9.17). $\square$

Note that the existence of nonoscillatory solutions implies the semiboundedness of $T_{\min}$. The following result will lead to a converse statement.

**Lemma 6.9.7.** Assume the conditions in (6.1.2) and (6.9.1) and assume that $T_{\min}$ is bounded from below. Let $u$ be a real solution of the equation $(L - \lambda_0)y = 0$ with $\lambda_0 \in \mathbb{R}$ and assume that $u$ has at least two zeros in $(a, b)$. Then $\lambda_0 \ge m(T_{\min})$. Consequently, for $\lambda_0 < m(T_{\min})$ any real solution of $(L - \lambda_0)y = 0$ has at most one zero in $(a, b)$.

Proof. Let $\lambda_0 \in \mathbb{R}$ and assume that $u$ is a real solution of the equation $(L-\lambda_0)y = 0$ which has two zeros $\alpha < \beta$. Denote the maximal and minimal Sturm–Liouville operators in $L^2_r(\alpha, \beta)$ by $T_{\max}(\alpha, \beta)$ and $T_{\min}(\alpha, \beta)$. Likewise, the preminimal operator in $L^2_r(\alpha, \beta)$ is denoted by $T_0(\alpha, \beta)$, and the Friedrichs extension of $T_{\min}(\alpha, \beta)$ by $S_{\mathrm{F}}(\alpha, \beta)$. By Theorem 6.8.5 (iii), it is clear that the restriction of $u$ to $[\alpha, \beta]$ is an eigenelement of $S_{\mathrm{F}}(\alpha, \beta)$ with eigenvalue $\lambda_0$, and therefore $\lambda_0 \ge m(S_{\mathrm{F}}(\alpha, \beta))$. Now recall from Lemma 5.3.1 and Definition 5.3.2 that

$$m(S\_{\mathcal{F}}(\alpha,\beta)) = m(T\_{\min}(\alpha,\beta)) = m(T\_0(\alpha,\beta)),$$

and obviously

$$m(T\_0(\alpha, \beta)) \ge m(T\_0) = m(T\_{\min}).$$

Therefore, $\lambda\_0 \geq m(T\_{\min})$. □

**Theorem 6.9.8.** Assume the conditions in (6.1.2) and (6.9.1). Then the operator $T\_{\min}$ is bounded from below if and only if there exist $\lambda\_a \in \mathbb{R}$ and $\lambda\_b \in \mathbb{R}$ such that $(L - \lambda\_a)y = 0$ and $(L - \lambda\_b)y = 0$ are nonoscillatory at a and b, respectively.

Proof. By Theorem 6.9.6, the existence of nonoscillatory solutions of $(L - \lambda\_0)y = 0$ for some $\lambda\_0 \in \mathbb{R}$ implies that $T\_{\min}$ is bounded from below. Conversely, if $T\_{\min}$ is bounded from below, then according to Lemma 6.9.7 the equation $(L - \lambda\_0)y = 0$ is nonoscillatory at both endpoints for any $\lambda\_0 < m(T\_{\min})$. □

**Remark 6.9.9.** If one of the endpoints is regular, then the appearance of the form $\mathfrak{t}$ in (6.9.10) and (6.9.11) becomes somewhat simpler. Assume for instance that the endpoint a is regular and that $\phi\_b$ is a real nonoscillatory solution of $(L - \lambda\_0)y = 0$ that does not vanish on an open interval $(b\_0, b)$. Let $b\_0 < d < b$ and define the linear space $\mathfrak{D}$ by

$$\mathfrak{D} = \left\{ f \in L^2\_r(a, b) : f \in AC(a, b), \ \sqrt{p}f' \in L^2(a, d), \ N\_{\phi\_b} f \in L^2(d, b) \right\}, \tag{6.9.19}$$

and the form t by

$$\begin{split} \mathfrak{t}[f,g] &= \int\_{d}^{b} (N\_{\phi\_b}f)(x) \overline{(N\_{\phi\_b}g)(x)} \, dx + \lambda\_0 \int\_{d}^{b} f(x) \overline{g(x)} r(x) \, dx \\ &\quad + \int\_{a}^{d} \left( (\sqrt{p}f')(x) \overline{(\sqrt{p}g')}(x) + q(x)f(x) \overline{g(x)} \right) dx \\ &\quad - \frac{(p\phi\_b')(d)}{\phi\_b(d)} f(d) \overline{g(d)}, \end{split} \tag{6.9.20}$$

where $f, g \in \mathfrak{D}$. The form $\mathfrak{t}$ and its domain $\mathfrak{D}$ in (6.9.19) and (6.9.20) do not depend on the particular choice of the point d in $(b\_0, b)$. For the corresponding Green formula, assume that $f, pf', g \in AC(a, b)$ and that $f, Lf \in L^2\_r(a, b)$. Then for any choice of $b'$ with $a < d < b' < b$ one has

$$\begin{split} &\int\_{a}^{b'} (Lf)(x) \overline{g(x)} r(x) \, dx \\ & \quad = W\_{b'}(f, \phi\_b) \left( \frac{\overline{g}}{\phi\_b} \right) (b') + (pf')(a) \overline{g(a)} \\ & \quad + \int\_{d}^{b'} (N\_{\phi\_b} f)(x) \overline{(N\_{\phi\_b} g)(x)} \, dx + \lambda\_0 \int\_{d}^{b'} f(x) \overline{g(x)} r(x) \, dx \\ & \quad + \int\_{a}^{d} \left( (\sqrt{p} f')(x) \overline{(\sqrt{p} g')}(x) + q(x) f(x) \overline{g(x)} \right) \, dx \\ & \quad - \frac{(p \phi\_b')(d)}{\phi\_b(d)} f(d) \overline{g(d)}. \end{split}$$

Let $\mathfrak{t}$ and $\mathfrak{D}$ be as in (6.9.19) and (6.9.20), and assume that $f \in \operatorname{dom} T\_{\max} \cap \mathfrak{D}$ and $g \in \mathfrak{D}$. Then the formula (6.9.15) becomes

$$(T\_{\max}f, g)\_{L^2\_r(a, b)} = \mathfrak{t}[f,g] + (pf')(a)\overline{g(a)} + \lim\_{b' \to b} W\_{b'}(f,\phi\_b) \left(\frac{\overline{g}}{\phi\_b}\right)(b'),\tag{6.9.21}$$

where the limit exists in $\mathbb{C}$.

## **6.10 Principal and nonprincipal solutions of Sturm–Liouville equations**

This section contains a further treatment of the nonoscillatory solutions of a Sturm–Liouville equation. The forms in Section 6.9 were defined by means of solutions of $(L - \lambda\_0)y = 0$ which are nonoscillatory near an endpoint. The nonoscillatory solutions $\phi$ will now be further classified as nonprincipal and principal, depending on the integrability of $(p\phi^2)^{-1}$ near an endpoint. The main result in this section is Theorem 6.10.9, which concerns the square-integrability of $N\_\phi f$ near an endpoint when f is absolutely continuous; it will be obtained by means of Lemma 6.10.1 and Theorem 6.10.4 below. In the following Section 6.11 and Section 6.12 there is a detailed treatment of the cases where the endpoint a and the endpoint b are either in the limit-circle case or in the limit-point case, respectively, and where the corresponding forms in (6.9.10) and (6.9.11) will be defined in terms of nonprincipal and principal solutions.

The following auxiliary results concern a measurable function $P : (a, b) \to \mathbb{R}$ which satisfies

$$1/P \in L\_{\text{loc}}^1(a, b) \quad \text{and} \quad P(x) > 0 \quad \text{for almost all } x \in (a, b). \tag{6.10.1}$$

They are stated with respect to the endpoint a of the interval (a, b); the corresponding results for the other endpoint are clear.

**Lemma 6.10.1.** Let the function P satisfy (6.10.1) and let a<α<b. Assume that

$$
\varphi \in AC(a, \alpha) \quad \text{and} \quad \sqrt{P}\varphi' \in L^2(a, \alpha).
$$

Then the following statements hold:

(i) One has

$$\lim\_{x \to a} \frac{\varphi(x)^2}{\int\_x^{\alpha} \frac{1}{P(t)} \, dt} \in \mathbb{C} \quad \text{and} \quad \int\_a^{\alpha'} \frac{|\varphi(t)|^2}{P(t)(\int\_t^{\alpha} \frac{1}{P(s)} \, ds)^2} \, dt < \infty$$

for any $a < \alpha' < \alpha$. Moreover,

$$\int\_{a}^{\alpha} \frac{1}{P(t)} \, dt = \infty \quad \Rightarrow \quad \lim\_{x \to a} \frac{\varphi(x)^2}{\int\_{x}^{\alpha} \frac{1}{P(t)} \, dt} = 0.$$

(ii) Assume that $\int\_a^\alpha \frac{1}{P(t)} \, dt < \infty$. Then $\lim\_{x \to a} \varphi(x)$ exists in $\mathbb{C}$ and one has

$$\lim\_{x \to a} \varphi(x) = 0 \quad \Rightarrow \quad \int\_{a}^{\alpha} \frac{|\varphi(t)|^{2}}{P(t)(\int\_{a}^{t} \frac{1}{P(s)} \, ds)^{2}} \, dt < \infty.$$

Moreover,

$$\lim\_{x \to a} \varphi(x) = 0 \quad \Leftrightarrow \quad \lim\_{x \to a} \frac{\varphi(x)^2}{\int\_a^x \frac{1}{P(t)} \, dt} = 0 \quad \Leftrightarrow \quad \lim\_{x \to a} \frac{\varphi(x)^2}{\int\_a^x \frac{1}{P(t)} \, dt} \in \mathbb{C}.$$

Proof. In the following proof it will be assumed without loss of generality that the function ϕ is real.

(i) As an abbreviation use the notation $H(t) = \int\_t^\alpha \frac{1}{P(s)} \, ds$, $a < t < \alpha$. Then the identities

$$PH\left(\left(\frac{\varphi}{\sqrt{H}}\right)'\right)^2 = P\left(\varphi' + \frac{1}{2}\frac{\varphi}{PH}\right)^2 = P(\varphi')^2 + \frac{\varphi\varphi'}{H} + \frac{1}{4}\frac{\varphi^2}{PH^2}$$

and

$$\frac{1}{2} \left( \frac{\varphi^2}{H} \right)' = \frac{\varphi \varphi'}{H} + \frac{1}{2} \frac{\varphi^2}{PH^2}$$

yield

$$PH\left(\left(\frac{\varphi}{\sqrt{H}}\right)'\right)^2 = P(\varphi')^2 - \frac{1}{4}\frac{\varphi^2}{PH^2} + \frac{1}{2}\left(\frac{\varphi^2}{H}\right)'.\tag{6.10.2}$$

In particular, one sees that

$$P(\varphi')^2 \ge \frac{1}{4} \frac{\varphi^2}{PH^2} - \frac{1}{2} \left(\frac{\varphi^2}{H}\right)'.$$

Integrate this last inequality over $[x, \alpha']$ with $a < x < \alpha' < \alpha$. Then

$$\begin{aligned} \int\_{x}^{\alpha'} P(t) \varphi'(t)^2 \, dt &\geq \frac{1}{4} \int\_{x}^{\alpha'} \frac{\varphi(t)^2}{P(t)H(t)^2} \, dt + \frac{1}{2} \frac{\varphi(x)^2}{H(x)} - \frac{1}{2} \frac{\varphi(\alpha')^2}{H(\alpha')}\\ &\geq \frac{1}{4} \int\_{x}^{\alpha'} \frac{\varphi(t)^2}{P(t)H(t)^2} \, dt - \frac{1}{2} \frac{\varphi(\alpha')^2}{H(\alpha')}. \end{aligned}$$

Since $\sqrt{P}\varphi'$ is square-integrable, taking the limit $x \to a$ shows the integrability result in (i). Hence, both terms on the right-hand side of the estimate

$$PH\left(\left(\frac{\varphi}{\sqrt{H}}\right)'\right)^2 = P\left(\varphi' + \frac{1}{2}\frac{\varphi}{PH}\right)^2 \le 2P(\varphi')^2 + \frac{1}{2}\frac{\varphi^2}{PH^2}$$

are integrable on $(a, \alpha')$. Therefore, in view of (6.10.2), one sees that the limit $\lim\_{x \to a} \varphi(x)^2/H(x)$ exists in $\mathbb{C}$. Thus, the first two statements in (i) have been proved.

In order to prove the last statement in (i) assume that $\int\_a^\alpha \frac{1}{P(t)} \, dt = \infty$. Then

$$\begin{split} \int\_{a}^{\alpha'} \frac{1}{P(t)H(t)} \, dt &= \lim\_{x \to a} \left( \int\_{x}^{\alpha'} \frac{1}{P(t)H(t)} \, dt \right) \\ &= \lim\_{x \to a} \left( \log H(x) - \log H(\alpha') \right) = \infty; \end{split} \tag{6.10.3}$$

note that $(\log H)' = -1/(PH)$. In view of

$$\int\_{a}^{\alpha'} \frac{1}{P(t)H(t)} \frac{\varphi(t)^2}{H(t)} \, dt = \int\_{a}^{\alpha'} \frac{\varphi(t)^2}{P(t)H(t)^2} \, dt < \infty,$$

it follows from (6.10.3) that actually $\lim\_{x \to a} \varphi(x)^2/H(x) = 0$, which is the limit result in assertion (i).

(ii) Assume that $\int\_a^\alpha \frac{1}{P(t)} \, dt < \infty$. First observe that for $a < y < x < \alpha$ the identity

$$
\varphi(x) - \varphi(y) = \int\_y^x \sqrt{P(t)} \varphi'(t) \frac{1}{\sqrt{P(t)}} \, dt
$$

together with the Cauchy–Schwarz inequality gives

$$|\varphi(x) - \varphi(y)|^2 \le \left| \int\_y^x \frac{1}{P(t)} \, dt \right| \left| \int\_y^x P(t) \varphi'(t)^2 \, dt \right|.$$

Due to the assumptions it is now clear that $\lim\_{y \to a} \varphi(y)$ exists in $\mathbb{C}$. Moreover, if, in particular, $\lim\_{y \to a} \varphi(y) = 0$, then the above inequality shows that

$$|\varphi(x)|^2 \le \left| \int\_a^x \frac{1}{P(t)} \, dt \right| \left| \int\_a^x P(t) \varphi'(t)^2 \, dt \right|.$$

Hence, the implications

$$\lim\_{x \to a} \varphi(x) = 0 \quad \Rightarrow \quad \lim\_{x \to a} \frac{\varphi(x)^2}{\int\_a^x \frac{1}{P(t)} \, dt} = 0 \quad \Rightarrow \quad \lim\_{x \to a} \frac{\varphi(x)^2}{\int\_a^x \frac{1}{P(t)} \, dt} \in \mathbb{C}$$

are clear. Furthermore, since $\lim\_{y \to a} \varphi(y)$ exists in $\mathbb{C}$,

$$\lim\_{x \to a} \frac{\varphi(x)^2}{\int\_a^x \frac{1}{P(t)} \, dt} \in \mathbb{C} \quad \Rightarrow \quad \lim\_{x \to a} \varphi(x) = 0.$$

This proves the equivalences in the last statement of (ii).

For the rest of the proof of (ii) use as abbreviation the notation

$$G(t) = \int\_{a}^{t} \frac{1}{P(s)} \, ds, \qquad a < t < \alpha.$$

Note that the integral is well defined due to the assumption $\int\_a^\alpha \frac{1}{P(t)} \, dt < \infty$. Then the identities

$$PG\left(\left(\frac{\varphi}{\sqrt{G}}\right)'\right)^2 = P\left(\varphi' - \frac{1}{2}\frac{\varphi}{PG}\right)^2 = P(\varphi')^2 - \frac{\varphi\varphi'}{G} + \frac{1}{4}\frac{\varphi^2}{PG^2}$$

and

$$\frac{1}{2} \left( \frac{\varphi^2}{G} \right)' = \frac{\varphi \varphi'}{G} - \frac{1}{2} \frac{\varphi^2}{PG^2}$$

lead to

$$PG\left(\left(\frac{\varphi}{\sqrt{G}}\right)'\right)^2 = P(\varphi')^2 - \frac{1}{4}\frac{\varphi^2}{PG^2} - \frac{1}{2}\left(\frac{\varphi^2}{G}\right)'.$$

In particular, one sees that

$$P(\varphi')^2 \ge \frac{1}{4} \frac{\varphi^2}{PG^2} + \frac{1}{2} \left(\frac{\varphi^2}{G}\right)'.$$

Integrate this last inequality over [x, α] with a<x<α. Then

$$\int\_{x}^{\alpha} P(t)\varphi'(t)^2 \,dt \ge \frac{1}{4} \int\_{x}^{\alpha} \frac{\varphi(t)^2}{P(t)G(t)^2} \,dt + \frac{1}{2} \frac{\varphi(\alpha)^2}{G(\alpha)} - \frac{1}{2} \frac{\varphi(x)^2}{G(x)}.$$

Recall that $\lim\_{x \to a} \varphi(x) = 0$ is equivalent to the existence of the limit $\lim\_{x \to a} \varphi(x)^2/G(x)$ in $\mathbb{C}$. Since $\sqrt{P}\varphi'$ is square-integrable, the integrability result in (ii) follows. □
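The key algebraic step in this proof, identity (6.10.2), can be verified symbolically. The following sketch (ours, not from the text) assumes SymPy is available; it treats P, H, and $\varphi$ as abstract functions and uses only the relation $H' = -1/P$.

```python
import sympy as sp

t = sp.symbols('t')
P = sp.Function('P')(t)
H = sp.Function('H')(t)       # H(t) = int_t^alpha ds/P(s), so H' = -1/P
phi = sp.Function('phi')(t)

# Both sides of (6.10.2)
lhs = P*H*sp.diff(phi/sp.sqrt(H), t)**2
rhs = (P*sp.diff(phi, t)**2
       - sp.Rational(1, 4)*phi**2/(P*H**2)
       + sp.Rational(1, 2)*sp.diff(phi**2/H, t))

# Substitute H' = -1/P and simplify the difference
residual = sp.simplify((lhs - rhs).subs(sp.Derivative(H, t), -1/P))
assert residual == 0
```

The companion identity in part (ii), with $G(t) = \int\_a^t \frac{1}{P(s)}\,ds$, can be checked the same way with the substitution $G' = 1/P$.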

**Corollary 6.10.2.** Let the function P satisfy (6.10.1) and let a<α<b. Assume that

$$
\varphi, \psi \in AC(a, \alpha) \quad \text{and} \quad \sqrt{P}\varphi', \sqrt{P}\psi' \in L^2(a, \alpha).
$$

Then

$$\liminf\_{x \to a} \left| P(x)\varphi'(x)\psi(x) \right| = 0,\tag{6.10.4}$$

when either

$$\int\_{a}^{\alpha} \frac{1}{P(t)} \, dt = \infty$$

or
$$\int\_{a}^{\alpha} \frac{1}{P(t)} \, dt < \infty \quad and \quad \lim\_{x \to a} \psi(x) = 0.$$

Proof. According to Lemma 6.10.1 (i) applied to $\psi$ one has, with $H(t) = \int\_t^\alpha \frac{1}{P(s)} \, ds$, $a < t < \alpha$, that

$$\frac{\psi}{\sqrt{P}H} \in L^2(a, \alpha')$$

for any $a < \alpha' < \alpha$. The assumption $\int\_a^\alpha \frac{1}{P(t)} \, dt = \infty$ implies that

$$\int\_{a}^{\alpha'} \frac{1}{P(t)H(t)} \, dt = \infty;\tag{6.10.5}$$

cf. (6.10.3). However, with $\sqrt{P}\varphi' \in L^2(a, \alpha)$ one also sees via the Cauchy–Schwarz inequality that

$$\int\_{a}^{\alpha'} \left| P(t)\varphi'(t)\psi(t) \right| \frac{1}{P(t)H(t)} dt = \int\_{a}^{\alpha'} \left| \sqrt{P(t)}\varphi'(t) \frac{\psi(t)}{\sqrt{P(t)}H(t)} \right| dt < \infty.$$

Therefore, it follows from (6.10.5) that (6.10.4) holds.

Next assume that

$$\int\_{a}^{\alpha} \frac{1}{P(t)} \, dt < \infty \quad \text{and} \quad \lim\_{x \to a} \psi(x) = 0.$$

Then with $G(t) = \int\_a^t \frac{1}{P(s)} \, ds$, $a < t < \alpha$, the assumption $\int\_a^\alpha \frac{1}{P(t)} \, dt < \infty$ implies that

$$\begin{aligned} \int\_a^\alpha \frac{1}{P(t)G(t)} \, dt &= \lim\_{x \to a} \int\_x^\alpha \frac{1}{P(t)G(t)} \, dt \\ &= \lim\_{x \to a} \left( \log G(\alpha) - \log G(x) \right) = \infty; \end{aligned}$$

note that one has $(\log G)' = 1/(PG)$. Thanks to the assumption $\lim\_{x \to a} \psi(x) = 0$, Lemma 6.10.1 (ii) applied to $\psi$ shows that

$$\frac{\psi}{\sqrt{P}G} \in L^2(a,\alpha).$$

Combining this with the fact that $\sqrt{P}\varphi' \in L^2(a, \alpha)$ one sees in a similar way as above that (6.10.4) holds. □

Now return to the Sturm–Liouville differential expression L given by (6.1.1) on the interval (a, b). In addition to the conditions in (6.1.2), it will be assumed that

$$p(x) > 0 \quad \text{for almost all } x \in (a, b).$$

If the equation $(L - \lambda\_0)y = 0$, $\lambda\_0 \in \mathbb{R}$, is nonoscillatory at the endpoint a, then its real solutions may be distinguished by the following properties.

**Definition 6.10.3.** Let $(L - \lambda\_0)y = 0$ with $\lambda\_0 \in \mathbb{R}$ be nonoscillatory at the endpoint a and let u and v be real solutions of $(L - \lambda\_0)y = 0$. Then u is said to be principal at a if $1/(pu^2)$ is not integrable at a, and v is said to be nonprincipal at a if $1/(pv^2)$ is integrable at a.

It is clear that a real solution of $(L - \lambda\_0)y = 0$ with $\lambda\_0 \in \mathbb{R}$ is either principal or nonprincipal at a. In Theorem 6.10.4 and Corollary 6.10.5 below it turns out that a principal solution exists and is uniquely determined up to real multiples. Hence, every solution that is linearly independent of a principal solution is nonprincipal.
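For a concrete illustration of the definition (an example of ours, not taken from the text): for $p = 1$, $q = 0$, and $\lambda\_0 = 0$ the equation $(L - \lambda\_0)y = -y'' = 0$ has the solutions $u(x) = x$ and $v(x) = 1$ on $(0, \infty)$, and u is principal at 0 while v is nonprincipal. A short SymPy check, consistent with (6.10.10):

```python
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)
p = sp.Integer(1)
u, v = x, sp.Integer(1)            # real solutions of -y'' = 0

# 1/(p u^2) is not integrable at 0, so u is principal at 0
assert sp.integrate(1/(p*u**2), (x, eps, 1)).limit(eps, 0, '+') == sp.oo
# 1/(p v^2) is integrable at 0, so v is nonprincipal at 0
assert sp.integrate(1/(p*v**2), (x, 0, 1)) == 1
# consistent with (6.10.10): u/v -> 0 at the endpoint a = 0
assert sp.limit(u/v, x, 0, '+') == 0
```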

**Theorem 6.10.4.** Let $(L - \lambda\_0)y = 0$ with $\lambda\_0 \in \mathbb{R}$ be nonoscillatory at the endpoint a. Then the following statements hold:

(i) Let u be a solution of $(L - \lambda\_0)y = 0$ which is principal at a. Assume that u does not vanish on $(a, a\_0)$ and let $\alpha \in (a, a\_0)$. Then v is a real solution of $(L - \lambda\_0)y = 0$ with $W(u, v) = 1$ if and only if there exists $\gamma \in \mathbb{R}$ such that

$$v(x) = -u(x)\left(\gamma + \int\_x^\alpha \frac{ds}{p(s)u(s)^2}\right), \quad a < x < \alpha. \tag{6.10.6}$$

For each $\gamma \in \mathbb{R}$ the solution v is nonprincipal at a and

$$\int\_{a}^{x} \frac{dt}{p(t)v(t)^2} = \frac{1}{\gamma + \int\_{x}^{\alpha} \frac{dt}{p(t)u(t)^2}}, \quad a < x < a\_v,\tag{6.10.7}$$

when v does not vanish on (a, av).

(ii) Let v be a solution of $(L - \lambda\_0)y = 0$ which is nonprincipal at a. Assume that v does not vanish on $(a, a\_0)$ and let $\alpha \in (a, a\_0)$. Then w is a real solution of $(L - \lambda\_0)y = 0$ with $W(v, w) = 1$ if and only if there exists $\gamma \in \mathbb{R}$ such that

$$w(x) = v(x)\left(\gamma + \int\_a^x \frac{ds}{p(s)v(s)^2}\right), \quad a < x < \alpha. \tag{6.10.8}$$

The solution w is principal at a if $\gamma = 0$ and nonprincipal at a if $\gamma \neq 0$, in which case

$$\int\_{a}^{x} \frac{dt}{p(t)w(t)^{2}} = \frac{1}{\gamma} - \frac{1}{\gamma + \int\_{a}^{x} \frac{1}{p(s)v(s)^{2}} ds}, \quad a < x < a\_{w}, \tag{6.10.9}$$

when w does not vanish on (a, aw).

Proof. (i) If v is given by (6.10.6), then it follows from the definition that

$$v'(x) = -u'(x)\left(\gamma + \int\_x^\alpha \frac{ds}{p(s)u(s)^2}\right) + \frac{1}{p(x)u(x)}$$

and

$$(pv')'(x) = -(pu')'(x)\left(\gamma + \int\_x^\alpha \frac{ds}{p(s)u(s)^2}\right).$$

Hence, v is a solution of (L − λ0)y = 0 and W(u, v) = 1. Conversely, if v is a solution with W(u, v) = 1, then it follows from (6.1.28) that

$$\int\_{x}^{\alpha} \frac{ds}{p(s)u(s)^2} = \frac{v(\alpha)}{u(\alpha)} - \frac{v(x)}{u(x)}, \quad a < x < \alpha,$$

which leads to (6.10.6). Observe that

$$\frac{d}{dt}\left(\frac{1}{\gamma + \int\_t^\alpha \frac{ds}{p(s)u(s)^2}}\right) = \frac{1}{p(t)u(t)^2} \frac{1}{\left(\gamma + \int\_t^\alpha \frac{ds}{p(s)u(s)^2}\right)^2} = \frac{1}{p(t)v(t)^2},$$

and consequently, for a<y<x<av:

$$\int\_{y}^{x} \frac{dt}{p(t)v(t)^2} = \frac{1}{\gamma + \int\_{x}^{\alpha} \frac{1}{p(s)u(s)^2} \, ds} - \frac{1}{\gamma + \int\_{y}^{\alpha} \frac{1}{p(s)u(s)^2} \, ds}.$$

Since u is principal at a, the last term on the right-hand side goes to 0 as y → a, so that v is nonprincipal at a, and (6.10.7) follows.

(ii) One verifies in the same way as in the proof of (i) that w is a real solution of $(L - \lambda\_0)y = 0$ with $W(v, w) = 1$ if and only if there exists $\gamma \in \mathbb{R}$ such that (6.10.8) holds. In a similar way as above observe that

$$\frac{d}{dt}\left(\frac{1}{\gamma + \int\_a^t \frac{ds}{p(s)v(s)^2}}\right) = -\frac{1}{p(t)v(t)^2} \frac{1}{\left(\gamma + \int\_a^t \frac{ds}{p(s)v(s)^2}\right)^2} = -\frac{1}{p(t)w(t)^2},$$

and one obtains for a<y<x<aw:

$$\int\_{y}^{x} \frac{dt}{p(t)w(t)^{2}} = \frac{1}{\gamma + \int\_{a}^{y} \frac{1}{p(s)v(s)^{2}} \, ds} - \frac{1}{\gamma + \int\_{a}^{x} \frac{1}{p(s)v(s)^{2}} \, ds}.$$

Since v is nonprincipal at a, one sees for $\gamma \neq 0$ that w is nonprincipal at a, and (6.10.9) follows. For $\gamma = 0$ it follows that w is principal at a. □
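Formulas (6.10.6) and (6.10.7) can be tested on the model case $p = 1$, $\lambda\_0 = 0$, with principal solution $u(x) = x$ at the endpoint 0 and $\alpha = 1$; the value $\gamma = 3$ below is an arbitrary illustrative choice (our sketch, assuming SymPy).

```python
import sympy as sp

x, s, t = sp.symbols('x s t', positive=True)
gamma = sp.Integer(3)              # illustrative choice of gamma
u = lambda z: z                    # principal solution at 0 (p = 1, alpha = 1)

# v from (6.10.6); here v(x) = -2x - 1
v = sp.expand(-u(x)*(gamma + sp.integrate(1/u(s)**2, (s, x, 1))))

# W(u, v) = u (p v') - (p u') v should equal 1
W = sp.simplify(u(x)*sp.diff(v, x) - sp.diff(u(x), x)*v)
assert W == 1

# (6.10.7): int_0^x dt/(p v^2) = 1/(gamma + int_x^1 ds/(p u^2))
lhs = sp.integrate(1/v.subs(x, t)**2, (t, 0, x))
rhs = 1/(gamma + sp.integrate(1/u(s)**2, (s, x, 1)))
assert sp.simplify(lhs - rhs) == 0
```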

Theorem 6.10.4 allows some flexibility. It is clear from the proof of Theorem 6.10.4 that one can choose $\alpha = a\_0$ in (6.10.6) if u does not vanish on $(a, a\_0]$ and one can choose $\alpha = a\_0$ in (6.10.8) if v does not vanish on $(a, a\_0]$. As to the nonoscillatory behavior of the solutions, observe that in (6.10.6) the factor

$$H\_{\alpha}(x) = \gamma + \int\_{x}^{\alpha} \frac{ds}{p(s)u(s)^2}, \quad a < x < a\_0,$$

is a decreasing function with $\lim\_{x \to a} H\_\alpha(x) = \infty$, while $\lim\_{x \to a\_0} H\_\alpha(x)$ is finite or $-\infty$, depending on whether u is nonprincipal or principal at $a\_0$. In any case, $H\_\alpha$ has at most one zero on $(a, a\_0)$. Note that $H\_\alpha(\alpha) = \gamma$, so that $H\_\alpha(x) \geq \gamma$ when $a < x \leq \alpha$.

If u and v are solutions which are principal and nonprincipal at a, respectively, and which are nonvanishing on $(a, a\_0)$, then, without loss of generality, one may assume that $W(u, v) = 1$. By choosing $a < \alpha < a\_0$ it follows from Theorem 6.10.4 that v is of the form (6.10.6) for a unique $\gamma \in \mathbb{R}$. Moreover, by construction one sees that $\gamma > 0$. A similar observation can be made about (6.10.8).

**Corollary 6.10.5.** Let $(L - \lambda\_0)y = 0$ with $\lambda\_0 \in \mathbb{R}$ be nonoscillatory at the endpoint a. Then there exists a nontrivial solution of $(L - \lambda\_0)y = 0$ which is principal at a. This solution is unique up to real nonzero multiples. In fact, a nontrivial solution u is principal at a if and only if

$$\lim\_{x \to a} \frac{u(x)}{v(x)} = 0 \tag{6.10.10}$$

for all solutions v of (L − λ0)y = 0 which are linearly independent of u.

Proof. A solution of (L − λ0)y = 0 is either principal or nonprincipal at a, and by Theorem 6.10.4 any solution of (L − λ0)y = 0 which is nonprincipal at a generates a solution which is principal at a. Thus, there exists a nontrivial solution of (L − λ0)y = 0 which is principal at a. It also follows from Theorem 6.10.4 that a principal solution is unique up to real nonzero multiples.

If u is a solution of (L−λ0)y = 0 which is principal at a, then it follows from Theorem 6.10.4 (i) that for every solution v of (L − λ0)y = 0 with W(u, v)=1 one has u(x)/v(x) → 0 as x → a, that is, (6.10.10) holds.

If u is a solution of $(L - \lambda\_0)y = 0$ which is nonprincipal at a, then (6.10.10) does not hold. Indeed, Theorem 6.10.4 (ii) (with v replaced by u and w replaced by v) shows that for every solution v of $(L - \lambda\_0)y = 0$ with $W(u, v) = 1$ there exists $\gamma \in \mathbb{R}$ such that

$$\frac{u(x)}{v(x)} = \frac{1}{\gamma + \int\_a^x \frac{ds}{p(s)u(s)^2}}$$

for $x \in (a, a\_v) \subset (a, a\_0)$, where $(a, a\_v)$ is an interval on which v does not vanish. Hence, $u(x)/v(x) \to 1/\gamma$ as $x \to a$ when $\gamma \neq 0$ and $u(x)/v(x) \to \infty$ as $x \to a$ when $\gamma = 0$. Therefore, (6.10.10) does not hold. □

Let a be regular, which means that $a \in \mathbb{R}$ and that $\int\_a^{a\_0} \frac{1}{p(t)} \, dt < \infty$. For any real solution v of $(L - \lambda\_0)y = 0$ with $v(a) \neq 0$ it follows that

$$\int\_{a}^{a\_0} \frac{1}{p(t)v(t)^2} dt < \infty.$$

Hence, the principal solution u corresponds to $u(a) = 0$ (and $(pu')(a) \neq 0$, as otherwise u would be trivial).

There is a refinement of the defining property of a principal solution when the endpoint a is in the limit-circle case.

**Corollary 6.10.6.** Let $(L - \lambda\_0)y = 0$ with $\lambda\_0 \in \mathbb{R}$ be nonoscillatory at a and assume that a is in the limit-circle case. Let u be a principal solution and let v be a nonprincipal solution which do not vanish on the interval $(a, a\_0)$, and let $\alpha \in (a, a\_0)$. Then

$$\frac{\int\_{a}^{x} u(t)^2 r(t) \, dt}{\int\_{a}^{x} v(t)^2 r(t) \, dt} \le \frac{u(x)^2}{v(x)^2}, \quad a < x < \alpha. \tag{6.10.11}$$

Proof. By assumption, u is principal and v is nonprincipal. Hence, according to Theorem 6.10.4 (i), v may be written as (6.10.6). Since a is in the limit-circle case, $u, v \in L^2\_r(a, a\_0)$. It follows from (6.10.6) that for $a < x < \alpha$

$$\left(\frac{v(x)}{u(x)}\right)^2 \int\_a^x u(t)^2 r(t) \, dt = \left(\gamma + \int\_x^\alpha \frac{ds}{p(s)u(s)^2}\right)^2 \int\_a^x u(t)^2 r(t) \, dt. \tag{6.10.12}$$

Moreover, since u and v do not vanish, (6.10.6) also implies that

$$0 \le \gamma + \int\_x^\alpha \frac{ds}{p(s)u(s)^2} \le \gamma + \int\_t^\alpha \frac{ds}{p(s)u(s)^2}$$

for a<t<x. Therefore, the right-hand side of (6.10.12) can be estimated by

$$\int\_{a}^{x} u(t)^2 \left(\gamma + \int\_{t}^{\alpha} \frac{ds}{p(s)u(s)^2} \right)^2 r(t) \, dt = \int\_{a}^{x} v(t)^2 r(t) \, dt,$$

and the assertion follows. □
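A quick sanity check of (6.10.11) in the model case $p = r = 1$ on $(0, 1)$, with $u(x) = x$ principal and $v(x) = 1$ nonprincipal at the regular (hence limit-circle) endpoint 0 (our example, assuming SymPy): the quotient of the integrals is $x^2/3$, while $(u/v)^2 = x^2$.

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
u, v, r = t, sp.Integer(1), sp.Integer(1)   # u principal, v nonprincipal, r = 1

ratio = sp.integrate(u**2*r, (t, 0, x)) / sp.integrate(v**2*r, (t, 0, x))
bound = (u.subs(t, x)/v)**2                 # (u(x)/v(x))^2
assert sp.simplify(ratio) == x**2/3
assert sp.simplify(bound - ratio) == 2*x**2/3   # hence ratio <= bound, as in (6.10.11)
```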

Returning to the general case of an endpoint, observe that the connections between the solutions in Theorem 6.10.4 give also the following connections between the associated Wronskians.

**Corollary 6.10.7.** Let $(L - \lambda\_0)y = 0$ with $\lambda\_0 \in \mathbb{R}$ be nonoscillatory at the endpoint a and assume that $f \in AC(a, a\_0)$. Then the following statements hold:

(i) Let u be a principal solution at a which does not vanish on (a, a0), let v be given by (6.10.6), and let α ∈ (a, a0). Then

$$W\_x(f, v) = W\_x(f, u) \frac{v(x)}{u(x)} + \frac{f(x)}{u(x)}, \quad a < x < \alpha. \tag{6.10.13}$$

(ii) Let v be a nonprincipal solution at a which does not vanish on (a, a0), let w be given by (6.10.8), and let α ∈ (a, a0). Then

$$W\_x(f, w) = W\_x(f, v) \frac{w(x)}{v(x)} + \frac{f(x)}{v(x)}, \quad a < x < \alpha. \tag{6.10.14}$$

□

The results about principal and nonprincipal solutions will now be applied in the context of the first-order differential expression (6.9.2) related to the Sturm–Liouville expression. Let $\phi$ be a real solution of $(L - \lambda\_0)y = 0$ which is nonoscillatory at the endpoint a and assume that $\phi$ does not vanish on $(a, a\_0)$. Recall that the first-order differential expression $N\_\phi$ on $(a, a\_0)$ is given by

$$N\_{\phi}f = \sqrt{p}\phi\left(\frac{f}{\phi}\right)' = -\frac{W(f,\phi)}{\sqrt{p}\phi} \tag{6.10.15}$$

for all functions $f \in AC(a, a\_0)$; cf. (6.9.2). Note that for $q = 0$ one may take $\lambda\_0 = 0$ and $\phi = 1$, in which case $N\_\phi f = \sqrt{p}f'$; cf. (6.8.11). One basic observation is contained in the following lemma.

**Lemma 6.10.8.** Let $(L - \lambda\_0)y = 0$ with $\lambda\_0 \in \mathbb{R}$ be nonoscillatory at the endpoint a. Then the following statements hold:

(i) Let u be a principal solution of $(L - \lambda\_0)y = 0$ (which is unique up to real multiples) and let v be a nonprincipal solution of $(L - \lambda\_0)y = 0$ such that $W(u, v) = 1$. Assume that u and v do not vanish on $(a, a\_0)$ and let $\alpha \in (a, a\_0)$. Then

$$(N\_u f)(x) - (N\_v f)(x) = \left(\frac{f}{\sqrt{p}uv}\right)(x), \quad a < x < \alpha,\tag{6.10.16}$$

for f ∈ AC(a, a0).

(ii) Let v and w be nonprincipal solutions of (L−λ0)y = 0 such that W(v, w)=1, assume that v and w do not vanish on (a, a0), and let α ∈ (a, a0). Then

$$(N\_v f)(x) - (N\_w f)(x) = \left(\frac{f}{\sqrt{p}vw}\right)(x), \quad a < x < \alpha,\tag{6.10.17}$$

for f ∈ AC(a, a0).

Proof. (i) As v is assumed to be a real solution of $(L - \lambda\_0)y = 0$ with $W(u, v) = 1$, it is given by (6.10.6) for some $\gamma \in \mathbb{R}$. Hence, (6.10.13) holds and it follows that

$$\frac{1}{\sqrt{p}v}W(f,v) = \frac{1}{\sqrt{p}u}W(f,u) + \frac{f}{\sqrt{p}uv}.$$

Using (6.10.15) this implies (6.10.16).

(ii) Since w is assumed to be a real solution of $(L - \lambda\_0)y = 0$ with $W(v, w) = 1$, it is given by (6.10.8) for some real $\gamma \neq 0$. Hence, (6.10.14) holds and it follows that

$$\frac{1}{\sqrt{p}w}W(f,w) = \frac{1}{\sqrt{p}v}W(f,v) + \frac{f}{\sqrt{p}vw}.$$

Using (6.10.15) this implies (6.10.17). □
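The two expressions for $N\_\phi$ in (6.10.15), which drive both computations above, can be compared symbolically (our sketch, assuming SymPy, with the Wronskian $W(f, \phi) = f(p\phi') - (pf')\phi$):

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Function('p')(x)
f = sp.Function('f')(x)
phi = sp.Function('phi')(x)

N1 = sp.sqrt(p)*phi*sp.diff(f/phi, x)           # sqrt(p) phi (f/phi)'
W = f*p*sp.diff(phi, x) - p*sp.diff(f, x)*phi   # W(f, phi) = f (p phi') - (p f') phi
N2 = -W/(sp.sqrt(p)*phi)                        # -W(f, phi)/(sqrt(p) phi)

assert sp.simplify(N1 - N2) == 0                # both expressions agree
```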

The following theorem is based on a direct application of Lemma 6.10.1. It shows the usefulness of the various types of solutions.

**Theorem 6.10.9.** Let $(L - \lambda\_0)y = 0$ with $\lambda\_0 \in \mathbb{R}$ be nonoscillatory at the endpoint a. Then the following statements hold:

(i) Let v be a solution which is nonprincipal at a and assume that v does not vanish on the subinterval $(a, a\_0)$. Let $f \in AC(a, a\_0)$ and $N\_v f \in L^2(a, c)$ for $a < c < a\_0$. Then

$$\lim\_{x \to a} \frac{f(x)}{v(x)}$$

exists and is finite.

(ii) Let v and w be nonprincipal solutions with W(v, w)=1, assume that both v and w do not vanish on (a, a0), and let f ∈ AC(a, a0). Then

$$N\_v f \in L^2(a, c) \quad \Leftrightarrow \quad N\_w f \in L^2(a, c)$$

for a<c<a0.

(iii) Let u be a principal solution, let v be a nonprincipal solution such that W(u, v)=1, assume that both u and v do not vanish on (a, a0), and let f ∈ AC(a, a0). Then

$$N\_u f \in L^2(a, a') \quad \Leftrightarrow \quad N\_v f \in L^2(a, a'') \quad \text{and} \quad \lim\_{x \to a} \frac{f(x)}{v(x)} = 0$$

for some $a < a', a'' < a\_0$.

(iv) Let u be a principal solution which does not vanish on $(a, a\_0)$. If $f \in AC(a, a\_0)$ and $N\_u f \in L^2(a, c)$ for $a < c < a\_0$, then

$$\lim\_{x \to a} \frac{f(x)}{u(x)} \left( \int\_a^x \frac{dt}{p(t)v(t)^2} \right)^{1/2} = 0$$

for any nonprincipal solution v at a.

Proof. (i) Let v be a nonprincipal solution and assume that $f \in AC(a, a\_0)$ and $N\_v f \in L^2(a, c)$. Now apply Lemma 6.10.1 (ii) with $P = pv^2$ and $\varphi = f/v$. In fact, in the present situation one has $\varphi = f/v \in AC(a, c)$ and $\sqrt{P}\varphi' = N\_v f \in L^2(a, c)$, and $\int\_a^c \frac{1}{P(t)} \, dt < \infty$, since v is nonprincipal at a. Therefore, Lemma 6.10.1 (ii) with $\alpha = c$ yields that the limit

$$\lim\_{x \to a} \frac{f(x)}{v(x)} = \lim\_{x \to a} \varphi(x)$$

exists and is finite.

(ii) By symmetry, it suffices to show the implication (⇒). Hence, assume that $f \in AC(a, a\_0)$ and $N\_v f \in L^2(a, c)$. Since v is nonprincipal at a it follows from (i) that the limit

$$\lim\_{x \to a} \frac{f(x)}{v(x)}$$

exists and is finite. As v does not vanish in (a, a0) one has f /v ∈ AC(a, a0) and for c ∈ (a, a0) there exists M > 0 such that

$$\left|\frac{f(x)}{v(x)}\right| \le M, \quad x \in (a, c).$$

By Lemma 6.10.8 (ii),

$$(N\_v f)(x) - (N\_w f)(x) = \frac{1}{\sqrt{p(x)}w(x)} \left(\frac{f}{v}\right)(x). \tag{6.10.18}$$

Now it follows from (6.10.18), with $1/(pw^2)$ integrable on $(a, c)$, that

$$\int\_{a}^{c} |N\_{w}f(s)|^{2}ds \leq 2\int\_{a}^{c} |N\_{v}f(s)|^{2}ds + 2M^{2}\int\_{a}^{c} \frac{1}{p(s)w(s)^{2}}ds < \infty,$$

and hence $N\_w f \in L^2(a, c)$.

(iii) (⇒) Assume that $f \in AC(a, a\_0)$ and $N\_u f \in L^2(a, a')$ for $a < a' < a\_0$. Since the principal solution u does not vanish on $(a, a\_0)$, the function

$$H\_{a'}(x) = \int\_x^{a'} \frac{ds}{p(s)u(s)^2}, \quad a < x < a',$$

is well defined and $H\_{a'}(x) \to \infty$ as $x \to a$. Then, according to Lemma 6.10.1 (i) with $P = pu^2$, $\varphi = f/u$, $\alpha = a'$, and $\alpha' = a''$ one obtains

$$\frac{1}{\sqrt{p}u} \left( \frac{f}{u} \right) \frac{1}{H\_{a'}} \in L^2(a, a'') \tag{6.10.19}$$

for any $a < a'' < a'$, and

$$\left(\left(\frac{f}{u}\right)(x)\right)^2 \frac{1}{H\_{a'}(x)} \to 0 \quad \text{as} \quad x \to a. \tag{6.10.20}$$

Since v is a nonprincipal solution it can be expressed in terms of u by means of Theorem 6.10.4 (i) with some γ > 0 as

$$v(x) = -u(x) \left(\gamma + H\_{a'}(x)\right), \quad a < x < a',$$

and consequently

$$\frac{u(x)}{v(x)}H\_{a'}(x) = -\frac{H\_{a'}(x)}{\gamma + H\_{a'}(x)}, \quad a < x < a'.$$

Since $H\_{a'}(x) \to \infty$ as $x \to a$, it is clear that

$$\left| \frac{u(x)}{v(x)} H\_{a'}(x) \right| \le 1, \quad a < x < a'. \tag{6.10.21}$$

Write (6.10.16) in Lemma 6.10.8 on the interval $(a, a')$ as

$$N\_u f - N\_v f = \frac{1}{\sqrt{p}u} \left(\frac{f}{u}\right) \frac{1}{H\_{a'}} \frac{u}{v} H\_{a'}.\tag{6.10.22}$$

Obviously, (6.10.19) and (6.10.21) imply that the right-hand side of (6.10.22) belongs to $L^2(a, a'')$. As $N\_u f \in L^2(a, a')$ this gives $N\_v f \in L^2(a, a'')$. To calculate $\lim\_{x \to a} f(x)/v(x)$, observe that on $(a, a')$

$$
\left(\frac{f}{v}\right)^2 = \left(\frac{f}{u}\right)^2 \frac{1}{H\_{a'}} \left(\frac{u}{v} H\_{a'}\right)^2 \frac{1}{H\_{a'}},\tag{6.10.23}
$$

which in view of (6.10.20) and (6.10.21) shows that

$$\lim\_{x \to a} \frac{f(x)}{v(x)} = 0.$$

(⇐) For the converse implication assume that $f \in AC(a, a\_0)$, $N\_v f \in L^2(a, a'')$, and $\lim\_{x \to a} f(x)/v(x) = 0$. By means of Theorem 6.10.4 (ii) one can express u in terms of v as

$$u(x) = v(x)K\_a(x), \quad K\_a(x) = \int\_a^x \frac{ds}{p(s)v(s)^2}, \quad a < x < a''.$$

Now by Lemma 6.10.1 (ii) with $P = pv^2$, $\varphi = f/v$, and $\alpha = a''$ one has

$$\frac{1}{\sqrt{p}v} \left( \frac{f}{v} \right) \frac{1}{K\_a} \in L^2(a, a''). \tag{6.10.24}$$

In order to show that the function $N\_u f$ is square-integrable near a, write (6.10.16) in Lemma 6.10.8 as

$$N\_u f - N\_v f = \frac{1}{\sqrt{p}v} \left(\frac{f}{v}\right) \cdot \frac{v}{u} = \frac{1}{\sqrt{p}v} \left(\frac{f}{v}\right) \frac{1}{K\_a}.\tag{6.10.25}$$

It follows from (6.10.24) and (6.10.25) that $N\_u f \in L^2(a, a'')$ and hence, in particular, $N\_u f \in L^2(a, a')$ for $a < a' \leq a''$.

(iv) Let u be a principal solution which does not vanish on $(a, a\_0)$. Assume that $f \in AC(a, a\_0)$ and $N\_u f \in L^2(a, a')$, where $a' = c \in (a, a\_0)$. Then it follows that (6.10.20) holds; cf. (iii). Recall that every solution v of $(L - \lambda\_0)y = 0$ which is nonprincipal at a has the form (6.10.6) with $\alpha = a'$. Then v does not vanish on $(a, a\_v) \subset (a, a')$. Now observe that for $f \in AC(a, a\_0)$ it then follows from (6.10.7) that for $a < x < a\_v$

$$\frac{f(x)^2}{u(x)^2} \left( \int\_a^x \frac{dt}{p(t)v(t)^2} \right) = \frac{f(x)^2}{u(x)^2} \frac{1}{\int\_x^{a'} \frac{dt}{p(t)u(t)^2}} \cdot \frac{\int\_x^{a'} \frac{dt}{p(t)u(t)^2}}{\gamma + \int\_x^{a'} \frac{dt}{p(t)u(t)^2}}.$$

Therefore, the right-hand side has the limit 0 as x → a. □

Let u and v be solutions of (L − λ0)y = 0, λ0 ∈ R, which are principal and nonprincipal at a, respectively. If the endpoint a is in the limit-circle case, then one can say more about the limits of f(x)/v(x) and f(x)/u(x) as x → a; cf. Theorem 6.10.9 (i) and (iv).

**Lemma 6.10.10.** Let (L − λ0)y = 0 with λ0 ∈ R be nonoscillatory at a and assume that the endpoint a is in the limit-circle case. Let u and v be real solutions of (L − λ0)y = 0 which do not vanish on (a, a0) and which are principal and nonprincipal at a, respectively, with W(u, v) = 1. Let f, pf′ ∈ AC(a, a0) and assume that f, Lf ∈ L²_r(a, a0). Then

$$\lim\_{x \to a} \frac{f(x)}{v(x)} = -\lim\_{x \to a} W\_x(f, u). \tag{6.10.26}$$

If, in addition,

$$\lim\_{x \to a} W\_x(f, u) \frac{v(x)}{u(x)} = 0,\tag{6.10.27}$$

then

$$\lim\_{x \to a} \frac{f(x)}{u(x)} = \lim\_{x \to a} W\_x(f, v). \tag{6.10.28}$$

Proof. Note that under the present conditions both limits

$$\lim\_{x \to a} W\_x(f, u) \quad \text{and} \quad \lim\_{x \to a} W\_x(f, v)$$

exist; cf. Lemma 6.2.5. Recall that v can be written in terms of u as in (6.10.6). Hence, for f ∈ AC(a, a0) Corollary 6.10.7 (i) shows that

$$W\_x(f, v) = W\_x(f, u) \frac{v(x)}{u(x)} + \frac{f(x)}{u(x)}, \quad a < x < \alpha. \tag{6.10.29}$$

Multiplying the identity (6.10.29) by u(x)/v(x) one obtains

$$\frac{u(x)}{v(x)}\,W\_x(f,v) = W\_x(f,u) + \frac{f(x)}{v(x)},\quad a < x < \alpha.$$

Since, by Corollary 6.10.5, lim_{x→a} u(x)/v(x) = 0, it follows that (6.10.26) holds. Furthermore, if (6.10.27) is satisfied, then (6.10.28) follows from (6.10.29). □
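The two limit formulas can be checked on a toy example. The following sketch is an illustration, not taken from the text; the Wronskian convention W_x(f, g) = f(pg′) − (pf′)g is an assumption made here so that W(u, v) = 1. It takes L = −D² on (0, 1) with λ0 = 0, principal solution u(x) = −x and nonprincipal solution v(x) = 1 at a = 0, and verifies (6.10.26) and (6.10.28) symbolically:

```python
import sympy as sp

# Toy data (not from the text): L = -D^2 on (0, 1), lambda0 = 0, endpoint a = 0;
# u(x) = -x is principal at 0, v(x) = 1 is nonprincipal, and the assumed
# Wronskian convention W_x(f, g) = f (p g') - (p f') g gives W(u, v) = 1.
x = sp.symbols('x', positive=True)
p = 1
u = -x
v = sp.Integer(1)
W = lambda f, g: f*(p*g.diff(x)) - (p*f.diff(x))*g
assert W(u, v) == 1

f = sp.sin(x)                           # f, pf' absolutely continuous, f(0) = 0
lhs1 = sp.limit(f/v, x, 0)              # lim f/v
rhs1 = -sp.limit(W(f, u), x, 0)         # -lim W_x(f, u), cf. (6.10.26)
lhs2 = sp.limit(f/u, x, 0)              # lim f/u
rhs2 = sp.limit(W(f, v), x, 0)          # lim W_x(f, v), cf. (6.10.28)
```

Here both sides of (6.10.26) come out as 0 and both sides of (6.10.28) come out as −1 (= f′(0) relative to u = −x), as the lemma predicts.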

The following result is a slight variation of Theorem 6.10.9 (iii) in the context of the limit-circle case.

**Proposition 6.10.11.** Let (L − λ0)y = 0 with λ0 ∈ R be nonoscillatory at a and assume that the endpoint a is in the limit-circle case. Let u and v be real solutions of (L − λ0)y = 0 which are principal and nonprincipal at a, respectively, and do not vanish on (a, a0). Let f, pf′ ∈ AC(a, a0) and assume that f, Lf ∈ L²_r(a, a0). Then the following statements are equivalent:

(i) N_u f ∈ L²(a, c) for some c ∈ (a, a0);

(ii) lim_{x→a} f(x)/v(x) = 0;

(iii) lim_{x→a} W_x(f, u) = 0;

(iv) lim_{x→a} f(x)/u(x) exists;
and in this case

$$\lim\_{x \to a} \frac{f(x)}{u(x)} = \lim\_{x \to a} W\_x(f, v).$$

Furthermore, if the endpoint a is regular, then (i)–(iv) are equivalent to

$$(\mathbf{v})\,\,\,f(a)=0.$$

Proof. Let u and v be principal and nonprincipal at a, respectively, and assume without loss of generality that W(u, v) = 1.

(i) ⇒ (ii) If N_u f ∈ L²(a, c), then Theorem 6.10.9 (iii) implies f(x)/v(x) → 0 as x → a.

(ii) ⇒ (iii) This follows from (6.10.26).

(iii) ⇒ (iv) Assume that lim_{x→a} W_x(f, u) = 0. Then the identity (6.1.9) shows that

$$W\_x(f, u) = \int\_a^x ((L - \lambda\_0)f)(t)u(t)r(t) \, dt,$$

which gives the estimate

$$|W\_x(f, u)| \le \left(\int\_a^x u(t)^2 r(t) \, dt\right)^{\frac{1}{2}} \left(\int\_a^x |((L - \lambda\_0)f)(t)|^2 r(t) \, dt\right)^{\frac{1}{2}}.$$

Combining this with the estimate (6.10.11) leads to

$$|W\_x(f,u)| \left| \frac{v(x)}{u(x)} \right| \le \left( \int\_a^x v(t)^2 r(t) \, dt \right)^{\frac{1}{2}} \left( \int\_a^x \left| ((L-\lambda\_0)f)(t) \right|^2 r(t) \, dt \right)^{\frac{1}{2}}$$

for x sufficiently close to a, so that

$$W\_x(f, u) \frac{v(x)}{u(x)} \to 0 \quad \text{as} \quad x \to a.$$

Therefore, lim_{x→a} f(x)/u(x) exists by Lemma 6.10.10.

(iv) ⇒ (i) Since lim_{x→a} W_x(f, u) exists, the assumption that lim_{x→a} f(x)/u(x) exists implies that N_u f ∈ L²(a, c) for a < c < a0; cf. (6.9.6).

(ii) ⇔ (v) Observe that in the case of a regular endpoint one has v(a) ≠ 0 for any nonprincipal solution v. This shows the equivalence of (ii) and (v). □

The importance of Theorem 6.10.9 and Proposition 6.10.11 will become clear in the treatment of the various forms associated with Sturm–Liouville expressions in Section 6.11 and Section 6.12. In fact, the dependence of the forms on the choice of nonprincipal and principal solutions will be indicated in Proposition 6.11.7 and Proposition 6.12.7.

## **6.11 Semibounded Sturm–Liouville operators and the limit-circle case**

Let L be the Sturm–Liouville differential expression in (6.1.1) on the open interval (a, b):

$$L = \frac{1}{r} \left[ -DpD + q \right], \quad D = d/dx,$$

and let the coefficient functions satisfy the conditions

$$\begin{cases} p(x) > 0, \ r(x) > 0, & \text{for almost all } x \in (a, b), \\ 1/p, q, r \in L\_{\text{loc}}^1(a, b). \end{cases} \tag{6.11.1}$$

In addition, it will be assumed that the equation (L − λ0)y = 0 is nonoscillatory at the endpoints a and b for some λ0 ∈ R, which implies that the minimal operator Tmin is bounded from below; cf. Theorem 6.9.6. Recall that if Tmin is semibounded from below, then in any case for every λ0 < m(Tmin) the equation (L − λ0)y = 0 is nonoscillatory; cf. Theorem 6.9.8. Furthermore, it will be assumed that the endpoints a and b are in the limit-circle case.

Let v_a and v_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are nonprincipal at a and b, respectively. Recall that v_a and v_b are real by Definition 6.10.3. Since it is assumed that a and b are in the limit-circle case, one sees that v_a and v_b belong to L²_r(a, b). These nonprincipal solutions v_a and v_b will be used to define a convenient boundary triplet.
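A classical concrete instance (an illustrative sketch, not part of the text) is the Legendre expression L = −D(1−x²)D on (−1, 1) with q = 0, r = 1, and λ0 = 0: both endpoints are in the limit-circle case, u = 1 is principal at both endpoints, v = artanh is nonprincipal at both endpoints, and the nonprincipal solution indeed lies in L²_r(−1, 1):

```python
import sympy as sp

# Illustration (not from the text): Legendre expression L = -D(1-x^2)D on (-1, 1)
x = sp.symbols('x')
p = 1 - x**2
u = sp.Integer(1)      # principal solution at both endpoints
v = sp.atanh(x)        # nonprincipal solution at both endpoints

# both solve (L - 0)y = 0, i.e. (p y')' = 0
assert sp.simplify((p*u.diff(x)).diff(x)) == 0
assert sp.simplify((p*v.diff(x)).diff(x)) == 0

# limit-circle case: the nonprincipal solution is still square-integrable
norm2 = sp.Integral(v**2, (x, -1, 1)).evalf()   # finite, approx pi^2/6
```

The logarithmic blow-up of artanh at ±1 is mild enough for square-integrability, which is exactly what the limit-circle assumption provides here.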

**Proposition 6.11.1.** Let v_a and v_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are nonprincipal at a and b, respectively. Assume that the endpoints a and b are in the limit-circle case. Then {C², Γ0, Γ1}, where

$$
\Gamma\_0 f = \begin{pmatrix}
\lim\_{x \to a} \frac{f(x)}{v\_a(x)} \\
\lim\_{x \to b} \frac{f(x)}{v\_b(x)}
\end{pmatrix} \quad \text{and} \quad \Gamma\_1 f = \begin{pmatrix}
-\lim\_{x \to a} W\_x(f, v\_a) \\
\lim\_{x \to b} W\_x(f, v\_b)
\end{pmatrix}, \ f \in \text{dom}\, T\_{\text{max}},
$$

is a boundary triplet for (Tmin)∗ = Tmax.

Proof. Let u_a and u_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are principal at a and b such that W_x(u_a, v_a) = 1 and W_x(u_b, v_b) = 1, respectively; cf. Theorem 6.10.4. By Lemma 6.10.10,

$$\lim\_{x \to a} \frac{f(x)}{v\_a(x)} = -\lim\_{x \to a} W\_x(f, u\_a),$$

and in a similar way one obtains

$$\lim\_{x \to b} \frac{f(x)}{v\_b(x)} = -\lim\_{x \to b} W\_x(f, u\_b).$$

Hence, the claim is that for f ∈ dom Tmax the mappings

$$
\Gamma\_0 f = \begin{pmatrix} \lim\_{x \to a} W\_x(f, -u\_a) \\ \lim\_{x \to b} W\_x(f, -u\_b) \end{pmatrix} \quad \text{and} \quad \Gamma\_1 f = \begin{pmatrix} -\lim\_{x \to a} W\_x(f, v\_a) \\ \lim\_{x \to b} W\_x(f, v\_b) \end{pmatrix}
$$

define a boundary triplet for Tmax . To see this, observe that

$$W\_x(v\_a, -u\_a) = 1 \quad \text{and} \quad W\_x(v\_b, -u\_b) = 1,$$

and apply Proposition 6.3.8. □

Choose a0 and b0 such that v_a does not vanish on (a, a0) and v_b does not vanish on (b0, b), and let c, d be as in (6.9.9). By means of the solutions v_a and v_b of (L − λ0)y = 0, which are nonprincipal at a and b, one introduces the form t by

$$\begin{split} \mathfrak{t}[f,g] &= \int\_{a}^{c} (N\_{v\_{a}}f)(x) \overline{(N\_{v\_{a}}g)(x)} \, dx + \int\_{d}^{b} (N\_{v\_{b}}f)(x) \overline{(N\_{v\_{b}}g)(x)} \, dx \\ &\quad + \lambda\_{0} \int\_{a}^{c} f(x) \overline{g(x)} r(x) \, dx + \lambda\_{0} \int\_{d}^{b} f(x) \overline{g(x)} r(x) \, dx \\ &\quad + \int\_{c}^{d} \left( (\sqrt{p}f')(x) \overline{(\sqrt{p}g')(x)} + q(x)f(x) \overline{g(x)} \right) dx \\ &\quad + \frac{(pv\_{a}')(c)}{v\_{a}(c)} f(c) \overline{g(c)} - \frac{(pv\_{b}')(d)}{v\_{b}(d)} f(d) \overline{g(d)} \end{split} \tag{6.11.2}$$

for f,g ∈ D, where

$$\mathfrak{D} = \left\{ f \in L^2\_r(a, b) \, : \, \begin{aligned} &f \in AC(a, b), \ \sqrt{p}f' \in L^2(c, d), \\ &N\_{v\_a} f \in L^2(a, c), \ N\_{v\_b} f \in L^2(d, b) \end{aligned} \right\}; \tag{6.11.3}$$

cf. (6.9.10)–(6.9.11). The next corollary follows from Lemma 6.9.4 and Theorem 6.9.6 with φ_a = v_a and φ_b = v_b.

**Corollary 6.11.2.** Let v_a and v_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are nonprincipal at a and b, respectively. Assume that the endpoints a and b are in the limit-circle case. Then t in (6.11.2)–(6.11.3) is a densely defined closed semibounded form in L²_r(a, b). Moreover, if S1 is the semibounded self-adjoint operator corresponding to t, then Tmin ⊂ S1 and, in fact,

$$(T\_{\min}f,g)\_{L^2\_r(a,b)} = \mathfrak{t}[f,g]$$

holds for all f ∈ dom Tmin ⊂ D and g ∈ D.

Now one shows that also dom Tmax ⊂ D. This property is important for the construction of a compatible boundary pair in Lemma 6.11.5.

**Lemma 6.11.3.** Let v_a and v_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are nonprincipal at a and b, respectively. Assume that the endpoints a and b are in the limit-circle case. Then

$$\operatorname{dom} T\_{\max} \subset \mathfrak{D}.$$

Proof. Let f ∈ dom Tmax. Then f, pf′ ∈ AC(a, b) and hence √p f′ ∈ L²(c, d). It follows from (6.10.15) with φ = v_a that

$$N\_{v\_a}f = -\frac{W(f, v\_a)}{\sqrt{p}v\_a}.$$

Since f ∈ dom Tmax and v_a ∈ L²_r(a, b), Lemma 6.2.5 shows that lim_{x→a} W_x(f, v_a) exists. Hence, x → W_x(f, v_a) is bounded on (a, c]. Consequently, N_{v_a} f ∈ L²(a, c), as v_a is nonprincipal at a and does not vanish on (a, a0). Likewise, it is clear that N_{v_b} f ∈ L²(d, b). Hence, f ∈ D. □

Introduce the mapping Λ : D → C² by

$$\Lambda f = \begin{pmatrix} \lim\_{x \to a} \frac{f(x)}{v\_a(x)} \\ \lim\_{x \to b} \frac{f(x)}{v\_b(x)} \end{pmatrix}, \quad f \in \mathfrak{D}. \tag{6.11.4}$$

Note that, by Theorem 6.10.9 (i), Λ is well defined.

**Lemma 6.11.4.** Let v_a and v_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are nonprincipal at a and b, respectively. Assume that the endpoints a and b are in the limit-circle case, and let t be the form in (6.11.2)–(6.11.3). Then for every ε > 0 there exists D_ε > 0 such that

$$\|\Lambda f\|\_{\mathbb{C}^2}^2 \le \varepsilon \,\mathfrak{t}[f] + D\_{\varepsilon} \|f\|\_{L^2\_r(a,b)}^2, \quad f \in \mathfrak{D}.$$

Proof. Choose ε > 0 and write the form t as the sum of

$$\begin{aligned} \mathfrak{t}\_{[c,d]}[f,g] &= \int\_c^d \left( (\sqrt{p}f')(x) \overline{(\sqrt{p}g')(x)} + q(x)f(x)\overline{g(x)} \right) dx \\ &+ \frac{(pv\_a')(c)}{v\_a(c)} f(c) \overline{g(c)} - \frac{(pv\_b')(d)}{v\_b(d)} f(d) \overline{g(d)}, \end{aligned}$$

$$\begin{aligned} \mathfrak{t}\_{(a,c)}[f,g] &= \int\_a^c (N\_{v\_a}f)(x) \overline{(N\_{v\_a}g)(x)} \, dx + \lambda\_0 \int\_a^c f(x) \overline{g(x)} r(x) \, dx, \\ \mathfrak{t}\_{(d,b)}[f,g] &= \int\_d^b (N\_{v\_b}f)(x) \overline{(N\_{v\_b}g)(x)} \, dx + \lambda\_0 \int\_d^b f(x) \overline{g(x)} r(x) \, dx. \end{aligned}$$

It has been shown in Lemma 6.9.4 that t is independent of the choice of c and d, and hence c ∈ (a, a0) and d ∈ (b0, b) can be chosen suitably close to a and b, respectively; see below. First observe that for f ∈ D and for all d ≤ x < b one has

$$\frac{f(x)}{v\_b(x)} = \frac{f(d)}{v\_b(d)} + \int\_d^x \left(\frac{f(t)}{v\_b(t)}\right)' dt = \frac{f(d)}{v\_b(d)} + \int\_d^x \frac{1}{\sqrt{p(t)}v\_b(t)} N\_{v\_b} f(t) \, dt$$

and hence

$$\begin{aligned} \left|\frac{f(x)}{v\_b(x)}\right|^2 &\le 2\left|\frac{f(d)}{v\_b(d)}\right|^2 + 2\int\_d^x \frac{1}{p(t)v\_b(t)^2} \, dt \int\_d^x |N\_{v\_b}f(t)|^2 \, dt \\ &\le 2\left|\frac{f(d)}{v\_b(d)}\right|^2 + 2\int\_d^b \frac{1}{p(t)v\_b(t)^2} \, dt \int\_d^b |N\_{v\_b}f(t)|^2 \, dt. \end{aligned}$$

Now choose d so close to b that

$$2\int\_{d}^{b} \frac{1}{p(t)v\_{b}(t)^{2}} \, dt \le \varepsilon,$$

so that now for all d ≤ x < b one has

$$\left|\frac{f(x)}{v\_b(x)}\right|^2 \le 2\left|\frac{f(d)}{v\_b(d)}\right|^2 + \varepsilon \int\_d^b |N\_{v\_b}f(t)|^2 \, dt.$$

Therefore,

$$\left| \lim\_{x \to b} \frac{f(x)}{v\_b(x)} \right|^2 \le 2 \left| \frac{f(d)}{v\_b(d)} \right|^2 + \varepsilon \int\_d^b |N\_{v\_b} f(t)|^2 \, dt$$

for all f ∈ D. Similarly, one can choose c so close to a that

$$\left| \lim\_{x \to a} \frac{f(x)}{v\_a(x)} \right|^2 \le 2 \left| \frac{f(c)}{v\_a(c)} \right|^2 + \varepsilon \int\_a^c |N\_{v\_a} f(t)|^2 \, dt$$

for all f ∈ D. An application of Corollary 6.8.6 shows that there exists C<sup>ε</sup> > 0 such that for all f ∈ D one has

$$\left|\frac{f(c)}{v\_a(c)}\right|^2 + \left|\frac{f(d)}{v\_b(d)}\right|^2 \le C\_\varepsilon \|f\|\_{L^2\_r(c,d)}^2 + \varepsilon \mathfrak{t}\_{[c,d]}[f].$$

The assertion follows by combining the above inequalities. □

In order to apply the theory developed in Chapter 5 it will be shown that the map Λ in (6.11.4) leads to a boundary pair which is compatible with the boundary triplet {C², Γ0, Γ1} in Proposition 6.11.1. As usual, the self-adjoint operator defined on ker Γ1 is denoted by A1.

**Lemma 6.11.5.** Let v_a and v_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are nonprincipal at a and b, respectively. Assume that the endpoints a and b are in the limit-circle case and let {C², Γ0, Γ1} be the boundary triplet in Proposition 6.11.1. Then {C², Λ} is a boundary pair for Tmin corresponding to S1 which is compatible with the boundary triplet {C², Γ0, Γ1}. Moreover, one has

$$(T\_{\max}f,g)\_{L^{2}\_{r}(a,b)} = (\Gamma\_{1}f,\Lambda g) + \mathfrak{t}[f,g], \quad f \in \text{dom}\,T\_{\max}, \ g \in \mathfrak{D}.\tag{6.11.5}$$

Proof. Consider the form t on dom t = D as in (6.11.2)–(6.11.3) and denote the corresponding semibounded self-adjoint operator in L²_r(a, b) by S1; cf. Corollary 6.11.2. Let ε > 0 and D_ε > 0 be as in Lemma 6.11.4. It follows from the estimate in Lemma 6.11.4 that for ρ < m(S1) there exists C_{ρ,ε} > 0 such that

$$\|\Lambda f\|\_{\mathbb{C}^2}^2 \le D\_{\varepsilon} \|f\|\_{L^2\_r(a,b)}^2 + \varepsilon \, \mathfrak{t}[f,f] \le C\_{\rho,\varepsilon} \|f\|\_{\mathfrak{t}\_{S\_1}-\rho}^2$$

for all f ∈ D. Therefore, Λ ∈ **B**(H_{t_{S1}−ρ}, C²). Moreover, according to Lemma 6.11.3 one has dom Tmax ⊂ D and hence Λ is an extension of the boundary mapping Γ0 in Proposition 6.11.1. Now Lemma 5.6.5 implies that {C², Λ} is a boundary pair for Tmin corresponding to S1.

In order to conclude that {C², Λ} and {C², Γ0, Γ1} are compatible, it remains to show that A1 = S1, where A1 is the self-adjoint operator defined on ker Γ1. In fact, since dom Tmax ⊂ D, the Green formula (6.9.15) is valid for f ∈ dom Tmax and g ∈ D,

$$\begin{aligned} (T\_{\max}f,g)\_{L^2\_r(a,b)} &= \mathfrak{t}[f,g] + \lim\_{b' \to b} W\_{b'}(f,v\_b) \left(\frac{\overline{g}}{v\_b}\right)(b') \\ &- \lim\_{a' \to a} W\_{a'}(f,v\_a) \left(\frac{\overline{g}}{v\_a}\right)(a'), \end{aligned}$$

which, in the present context, is equivalent to (6.11.5). Hence,

$$(A\_1 f, g)\_{L^2\_r(a, b)} = \mathfrak{t}[f, g]$$

for all f ∈ dom A1 and g ∈ dom t. As A1 is self-adjoint, the first representation theorem implies A1 = S1. □

Recall that by means of the boundary triplet in Proposition 6.11.1, all self-adjoint extensions of Tmin are in one-to-one correspondence with the self-adjoint relations Θ in C² via

$$\operatorname{dom} A\_{\Theta} = \left\{ f \in \operatorname{dom} T\_{\max} \, : \, \{ \Gamma\_0 f, \Gamma\_1 f \} \in \Theta \right\}. \tag{6.11.6}$$

The next result, which is an immediate consequence of Theorem 5.6.13 and Corollary 5.6.14, makes use of the compatible boundary pair in Lemma 6.11.5 and provides a characterization of all closed semibounded forms associated with the semibounded self-adjoint extensions AΘ.

**Theorem 6.11.6.** Let {C², Γ0, Γ1} be the boundary triplet in Proposition 6.11.1, let Θ be a self-adjoint relation in C², and let AΘ be the corresponding self-adjoint restriction of Tmax in (6.11.6). Then AΘ is semibounded from below and the corresponding densely defined closed semibounded form tΘ in L²_r(a, b) such that

$$(A\_{\Theta}f,g)\_{L^{2}\_{r}(a,b)} = \mathfrak{t}\_{\Theta}[f,g], \quad f \in \text{dom}\,A\_{\Theta}, \ g \in \text{dom}\,\mathfrak{t}\_{\Theta},$$

is given in terms of t in (6.11.2)–(6.11.3), and Λ in (6.11.4) as follows:

(i) If Θ is a symmetric 2 × 2-matrix, then

$$\mathfrak{t}\_{\Theta}[f,g] = \mathfrak{t}[f,g] + \left(\Theta \Lambda f, \Lambda g\right), \quad \text{dom}\,\mathfrak{t}\_{\Theta} = \mathfrak{D}.$$

(ii) If Θ = Θop ⊕ Θmul with respect to the decomposition C² = dom Θop ⊕ mul Θ and dim dom Θop = 1, then

$$\mathfrak{t}\_{\Theta}[f,g] = \mathfrak{t}[f,g] + \left(\Theta\_{\mathrm{op}}\Lambda f, \Lambda g\right), \quad \mathrm{dom}\,\mathfrak{t}\_{\Theta} = \left\{ h \in \mathfrak{D} : \Lambda h \in \mathrm{dom}\,\Theta\_{\mathrm{op}} \right\}.$$

(iii) If Θ = {0} × C², then AΘ = A0 coincides with the Friedrichs extension SF and

$$\mathfrak{t}\_{\Theta}[f,g] = \mathfrak{t}[f,g], \quad \mathrm{dom}\,\mathfrak{t}\_{\Theta} = \left\{ h \in \mathfrak{D} : \Lambda h = 0 \right\}.$$

The above description in Theorem 6.11.6 of the (automatically) semibounded self-adjoint extensions of Tmin is in terms of the corresponding closed semibounded forms via the first representation theorem in Section 5.1; see also Theorem 5.6.13. The boundary triplet and the compatible boundary pair are provided by the choice of the solutions v_a and v_b of (L − λ0)y = 0, which are nonprincipal at a and b, respectively; cf. Proposition 6.11.1.

It will now be shown what the results look like for a different choice of nonprincipal solutions. First let w_a and w_b be nonprincipal solutions of (L − λ0)y = 0 at a and b, respectively. Assume that v_a and w_a do not vanish on (a, a0) and that v_b and w_b do not vanish on (b0, b). Denote the form generated by the solutions w_a and w_b by t′; cf. (6.11.2)–(6.11.3). Then according to (ii) in Theorem 6.10.9, dom t′ = dom t = D. To describe t′, let u_a and u_b be solutions of (L − λ0)y = 0 which are principal at a and b, respectively, and which satisfy W(u_a, v_a) = 1 and W(u_b, v_b) = 1; cf. Theorem 6.10.4. Then clearly

$$w\_a = \alpha\_a v\_a + \beta\_a u\_a \quad \text{and} \quad w\_b = \alpha\_b v\_b + \beta\_b u\_b$$

for some α_a, β_a, α_b, β_b ∈ R, where α_a and α_b are different from zero. Denote the boundary triplet generated by w_a and w_b by {C², Γ′0, Γ′1} and let {C², Λ′} be the corresponding boundary pair; cf. Proposition 6.11.1 and (6.11.4).

**Proposition 6.11.7.** The boundary triplet {C², Γ′0, Γ′1} and the boundary pair {C², Λ′} generated by the nonprincipal solutions w_a and w_b are given by

$$
\Lambda' f = \begin{pmatrix} \frac{1}{\alpha\_a} & 0 \\ 0 & \frac{1}{\alpha\_b} \end{pmatrix} \Lambda f, \quad f \in \mathfrak{D},
$$

and

$$
\Gamma\_1' f = \begin{pmatrix} \alpha\_a & 0 \\ 0 & \alpha\_b \end{pmatrix} \Gamma\_1 f + \begin{pmatrix} \beta\_a & 0 \\ 0 & -\beta\_b \end{pmatrix} \Lambda f, \quad f \in \text{dom } T\_{\text{max}}.
$$

Moreover, the form t′ coincides with tΘ as in Theorem 6.11.6, where the self-adjoint matrix Θ is given by

$$
\Theta = \begin{pmatrix} -\frac{\beta\_a}{\alpha\_a} & 0 \\ 0 & \frac{\beta\_b}{\alpha\_b} \end{pmatrix}.
$$

Proof. The following observations are for the endpoint a; the results are similar for the endpoint b. Since by Corollary 6.10.5 u_a(x)/v_a(x) → 0 as x → a, one has

$$\lim\_{x \to a} \frac{f(x)}{w\_a(x)} = \lim\_{x \to a} \frac{f(x)}{\alpha\_a v\_a(x)} \cdot \frac{1}{1 + \frac{\beta\_a}{\alpha\_a} \frac{u\_a(x)}{v\_a(x)}} = \frac{1}{\alpha\_a} \lim\_{x \to a} \frac{f(x)}{v\_a(x)}.$$

Furthermore, it is clear that

$$W(f, w\_a) = \alpha\_a W(f, v\_a) + \beta\_a W(f, u\_a)$$

and recall that

$$\lim\_{x \to a} W\_x(f, u\_a) = -\lim\_{x \to a} \frac{f(x)}{v\_a(x)};$$

cf. Lemma 6.10.10. Hence, one sees that

$$\lim\_{x \to a} W\_x(f, w\_a) = \alpha\_a \lim\_{x \to a} W\_x(f, v\_a) - \beta\_a \lim\_{x \to a} \frac{f(x)}{v\_a(x)}.$$

Thus, the results for the boundary triplet and boundary pair follow directly from Proposition 6.11.1 and (6.11.4). Analogous to the Green formula (6.11.5) one has

$$\begin{aligned} (T\_{\max}f,g)\_{L^2\_r(a,b)} &= (\Gamma'\_1 f, \Lambda' g) + \mathfrak{t}'[f,g] \\ &= (\Gamma\_1 f, \Lambda g) + \left( \begin{pmatrix} \frac{\beta\_a}{\alpha\_a} & 0 \\ 0 & -\frac{\beta\_b}{\alpha\_b} \end{pmatrix} \Lambda f, \Lambda g \right) + \mathfrak{t}'[f,g] \end{aligned}$$

for f ∈ dom Tmax and g ∈ D. Comparison of the right-hand sides gives

$$\mathfrak{t}[f,g] = \begin{pmatrix} \begin{pmatrix} \frac{\beta\_a}{\alpha\_a} & 0\\ 0 & -\frac{\beta\_b}{\alpha\_b} \end{pmatrix} \Lambda f, \Lambda g \end{pmatrix} + \mathfrak{t}'[f,g],$$

for f ∈ dom Tmax and g ∈ D. It is easily seen that in fact the last identity holds for all f, g ∈ D, which completes the proof. □
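The rescaling Λ′ = diag(1/α_a, 1/α_b)Λ can be seen directly on a toy example. The following sketch uses hypothetical data, not taken from the text: L = −D² on (0, 1) with λ0 = 0, principal u_a(x) = −x and nonprincipal v_a(x) = 1 at a = 0, and w_a = α_a v_a + β_a u_a:

```python
import sympy as sp

# Toy data (not from the text): L = -D^2 on (0, 1), lambda0 = 0, endpoint a = 0
x = sp.symbols('x', positive=True)
u_a = -x                         # principal solution at 0
v_a = sp.Integer(1)              # nonprincipal solution at 0
alpha_a, beta_a = 2, 3           # any alpha_a != 0
w_a = alpha_a*v_a + beta_a*u_a   # another nonprincipal solution: 2 - 3x

f = sp.cos(x)                    # smooth test function with f(0) = 1
lim_v = sp.limit(f/v_a, x, 0)    # lim f/v_a
lim_w = sp.limit(f/w_a, x, 0)    # lim f/w_a = (1/alpha_a) lim f/v_a
assert lim_w == lim_v/alpha_a
```

The principal part β_a u_a does not affect the limit, exactly as in the first display of the proof above.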

Next let u_a and u_b be nontrivial solutions of (L − λ0)y = 0 which are principal at a and b, respectively, and assume that u_a does not vanish on (a, a0) and that u_b does not vanish on (b0, b). Denote the form generated by the solutions u_a and u_b by t̃; cf. Theorem 6.9.6.

**Proposition 6.11.8.** Let the form t̃ be generated by the solutions u_a and u_b of (L − λ0)y = 0 which are principal at a and b, respectively. Then t̃ coincides with tΘ as in Theorem 6.11.6, where Θ = {0} × C², or, equivalently, t̃ is the form generated by the Friedrichs extension SF:

$$\mathfrak{t}\_{S\_F} = \widetilde{\mathfrak{t}}.$$

Proof. Recall from Theorem 6.9.6 that

$$\mathfrak{t}\_{S\_F} \subset \widetilde{\mathfrak{t}}.\tag{6.11.7}$$

Furthermore, let t be the form in (6.11.2) defined on D in (6.11.3) generated by the solutions v_a and v_b of (L − λ0)y = 0 which are nonprincipal at a and b, respectively. Now consider f ∈ D and observe that, by Theorem 6.10.9 (iii), N_{u_a} f is square-integrable at a if and only if N_{v_a} f is square-integrable at a and lim_{x→a} f(x)/v_a(x) = 0; an analogous statement holds at the endpoint b. Therefore, f ∈ dom t̃ if and only if f ∈ dom t and

$$\lim\_{x \to a} \frac{f(x)}{v\_a(x)} = 0 \quad \text{and} \quad \lim\_{x \to b} \frac{f(x)}{v\_b(x)} = 0. \tag{6.11.8}$$

Consider the boundary pair {C², Λ} in Lemma 6.11.5 with the boundary map Λ in (6.11.4). Then (6.11.8) is equivalent to f ∈ ker Λ and it follows that

$$
\operatorname{dom} \mathfrak{t}\_{S\_F} = \operatorname{dom} \mathfrak{t} \cap \ker \Lambda = \operatorname{dom} \widetilde{\mathfrak{t}}.
$$

Hence, dom t_{S_F} = dom t̃ and from (6.11.7) one concludes t_{S_F} = t̃. □
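Concretely, in the Legendre example −D(1−x²)D on (−1, 1) (an illustration, not from the text, with v_a = v_b = artanh taken as nonprincipal solution at both endpoints), condition (6.11.8) holds for every f that remains bounded at ±1, since artanh blows up there; such functions therefore lie in ker Λ:

```python
import sympy as sp

# Illustration (not from the text): Legendre expression on (-1, 1), v = atanh
x = sp.symbols('x')
v = sp.atanh(x)        # nonprincipal solution at both endpoints, |v| -> oo
f = 1 - x**2           # bounded near the endpoints

lim_a = sp.limit(f/v, x, -1, '+')   # condition (6.11.8) at a = -1
lim_b = sp.limit(f/v, x, 1, '-')    # condition (6.11.8) at b = +1
```

Both one-sided limits vanish, so f belongs to the form domain of the Friedrichs extension in this example.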

For the sake of completeness the following equivalent characterizations of the Friedrichs extension of Tmin are mentioned explicitly; cf. Proposition 6.10.11.

**Corollary 6.11.9.** Let v_a and v_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are nonprincipal at a and b, and let u_a and u_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are principal at a and b. Assume that the endpoints a and b are in the limit-circle case. Then f ∈ dom Tmax is in the domain of the Friedrichs extension SF of Tmin if and only if one of the following equivalent conditions holds:

(i) N_{u_a} f ∈ L²(a, c) and N_{u_b} f ∈ L²(d, b);

(ii) lim_{x→a} f(x)/v_a(x) = 0 and lim_{x→b} f(x)/v_b(x) = 0;

(iii) lim_{x→a} W_x(f, u_a) = 0 and lim_{x→b} W_x(f, u_b) = 0;

(iv) lim_{x→a} f(x)/u_a(x) and lim_{x→b} f(x)/u_b(x) exist.
In the next remark the case of a regular endpoint is briefly discussed.

**Remark 6.11.10.** The considerations in this section simplify if one endpoint, say a, is regular. In that case one can choose the boundary triplet {C², Γ0, Γ1}, where

$$
\Gamma\_0 f = \begin{pmatrix} f(a) \\ \lim\_{x \to b} \frac{f(x)}{v\_b(x)} \end{pmatrix} \quad \text{and} \quad \Gamma\_1 f = \begin{pmatrix} (pf')(a) \\ \lim\_{x \to b} W\_x(f, v\_b) \end{pmatrix}, \ f \in \text{dom}\, T\_{\text{max}};
$$

cf. (6.9.21). The form t and the domain D in (6.11.2) and (6.11.3) reduce to (6.9.19) and (6.9.20) with φ_b = v_b in Remark 6.9.9, respectively. The corresponding boundary map Λ : D → C² in (6.11.4) has the form

$$\Lambda f = \begin{pmatrix} f(a) \\ \lim\_{x \to b} \frac{f(x)}{v\_b(x)} \end{pmatrix}, \quad f \in \mathfrak{D}.$$
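An elementary illustration of such a boundary triplet (a sketch, not taken from the text; for simplicity both endpoints are assumed regular, and the Wronskian convention W_x(f, g) = f(pg′) − (pf′)g is an assumption): for L = −D² on (0, 1) with p = r = 1 and q = 0, the choice v_b(x) = 1, nonprincipal at b = 1, gives Γ0 f = (f(0), f(1)) and Γ1 f = (f′(0), −f′(1)), and the abstract Green identity (Tmax f, g) − (f, Tmax g) = (Γ1 f, Γ0 g) − (Γ0 f, Γ1 g) can be verified symbolically:

```python
import sympy as sp

# Sketch (not from the text): L = -D^2 on (0, 1), p = r = 1, q = 0,
# both endpoints regular; Gamma_0 f = (f(0), f(1)), Gamma_1 f = (f'(0), -f'(1)).
x = sp.symbols('x')
f = x**3 + 2*x
g = x**2 - 1
Lf, Lg = -f.diff(x, 2), -g.diff(x, 2)

lhs = sp.integrate(Lf*g - f*Lg, (x, 0, 1))   # (Tmax f, g) - (f, Tmax g)

G0 = lambda h: (h.subs(x, 0), h.subs(x, 1))                   # Gamma_0
G1 = lambda h: (h.diff(x).subs(x, 0), -h.diff(x).subs(x, 1))  # Gamma_1
rhs = (G1(f)[0]*G0(g)[0] + G1(f)[1]*G0(g)[1]
       - G0(f)[0]*G1(g)[0] - G0(f)[1]*G1(g)[1])

assert sp.simplify(lhs - rhs) == 0
```

The identity is just integration by parts twice; the sign in the second component of Γ1 is what makes the boundary terms at both endpoints combine correctly.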

## **6.12 Semibounded Sturm–Liouville operators and the limit-point case**

Let L be the Sturm–Liouville differential expression in (6.1.1) on the open interval (a, b):

$$L = \frac{1}{r} \left[ -DpD + q \right], \quad D = d/dx,$$

and let the coefficient functions satisfy the conditions (6.11.1). In addition, it will be assumed that the equation (L − λ0)y = 0 is nonoscillatory at the endpoints a and b for some λ0 ∈ R; cf. Theorem 6.9.6 and Theorem 6.9.8. Let v_a be a solution of (L − λ0)y = 0 which is nonprincipal at a and let u_b be a solution of (L − λ0)y = 0 which is principal at b. Furthermore, it will be assumed that the endpoint a is in the limit-circle case and that b is in the limit-point case. Hence, v_a ∈ L²_r(a, a′) for a < a′ < b.

**Proposition 6.12.1.** Assume that the endpoint a is in the limit-circle case and that the endpoint b is in the limit-point case. Let v_a be a solution of the equation (L − λ0)y = 0, λ0 ∈ R, which is nonprincipal at a. Then {C, Γ0, Γ1}, where

$$\Gamma\_0 f = \lim\_{x \to a} \frac{f(x)}{v\_a(x)} \quad \text{and} \quad \Gamma\_1 f = -\lim\_{x \to a} W\_x(f, v\_a), \quad f \in \text{dom}\, T\_{\text{max}}\,, \tag{6.12.1}$$

is a boundary triplet for (Tmin)∗ = Tmax.

Proof. Let u_a be a solution of (L − λ0)y = 0, λ0 ∈ R, which is principal at a such that W_x(u_a, v_a) = 1; cf. Theorem 6.10.4. By Lemma 6.10.10, one has that

$$\lim\_{x \to a} \frac{f(x)}{v\_a(x)} = -\lim\_{x \to a} W\_x(f, u\_a),$$

and hence the claim is that for f ∈ dom Tmax the mappings

$$
\Gamma\_0 f = \lim\_{x \to a} W\_x(f, -u\_a) \quad \text{and} \quad \Gamma\_1 f = -\lim\_{x \to a} W\_x(f, v\_a)
$$

define a boundary triplet for Tmax. To see this, note that W_x(v_a, −u_a) = 1 and apply Proposition 6.4.9. □
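A standard special case for orientation (a sketch, not from the text; the Wronskian convention W_x(f, g) = f(pg′) − (pf′)g is an assumption): L = −D² on (0, ∞) with λ0 = 0 has a regular, hence limit-circle, endpoint at 0 and is in the limit-point case at ∞. With the nonprincipal solution v_a(x) = 1 at 0, the maps (6.12.1) reduce to Γ0 f = f(0) and Γ1 f = (pf′)(0):

```python
import sympy as sp

# Sketch (not from the text): L = -D^2 on (0, oo), a = 0 regular, b = oo limit point
x = sp.symbols('x', positive=True)
p = 1
v_a = sp.Integer(1)            # nonprincipal solution at a = 0 (the principal one is x)
W = lambda f, g: f*(p*g.diff(x)) - (p*f.diff(x))*g   # assumed Wronskian convention

f = sp.exp(-x)                 # a convenient element of dom Tmax
Gamma0 = sp.limit(f/v_a, x, 0)         # reduces to f(0)
Gamma1 = -sp.limit(W(f, v_a), x, 0)    # reduces to (p f')(0)
```

For f = e^{−x} this gives Γ0 f = 1 and Γ1 f = −1, i.e. the classical boundary values f(0) and f′(0) of the half-line problem.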

Choose a0 and b0 such that v_a does not vanish on (a, a0) and u_b does not vanish on (b0, b), and let c, d be as in (6.9.9). By means of the solutions v_a and u_b of (L − λ0)y = 0 the following form t will be considered:

$$\begin{split} \mathfrak{t}[f,g] &= \int\_{a}^{c} (N\_{v\_{a}}f)(x) \overline{(N\_{v\_{a}}g)(x)} \, dx + \int\_{d}^{b} (N\_{u\_{b}}f)(x) \overline{(N\_{u\_{b}}g)(x)} \, dx \\ &\quad + \lambda\_{0} \int\_{a}^{c} f(x) \overline{g(x)} r(x) \, dx + \lambda\_{0} \int\_{d}^{b} f(x) \overline{g(x)} r(x) \, dx \\ &\quad + \int\_{c}^{d} \left( (\sqrt{p}f')(x) \overline{(\sqrt{p}g')(x)} + q(x)f(x) \overline{g(x)} \right) dx \\ &\quad + \frac{(pv\_{a}')(c)}{v\_{a}(c)} f(c) \overline{g(c)} - \frac{(pu\_{b}')(d)}{u\_{b}(d)} f(d) \overline{g(d)} \end{split} \tag{6.12.2}$$

for f,g ∈ D, where the domain of definition D is given by

$$\mathfrak{D} = \left\{ f \in L^2\_r(a, b) \, : \, \begin{aligned} &f \in AC(a, b), \ \sqrt{p}f' \in L^2(c, d), \\ &N\_{v\_a} f \in L^2(a, c), \ N\_{u\_b} f \in L^2(d, b) \end{aligned} \right\}. \tag{6.12.3}$$

The next corollary is the counterpart of Corollary 6.11.2 in the present situation; it follows from Lemma 6.9.4 and Theorem 6.9.6 with φ_a = v_a and φ_b = u_b.

**Corollary 6.12.2.** Let v_a and u_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are nonprincipal at a and principal at b, respectively. Assume that the endpoint a is in the limit-circle case and the endpoint b is in the limit-point case. Then t in (6.12.2)–(6.12.3) is a densely defined closed semibounded form in L²_r(a, b). Moreover, if S1 is the semibounded self-adjoint operator corresponding to t, then Tmin ⊂ S1 and, in fact,

$$(T\_{\min}f,g)\_{L^2\_r(a,b)} = \mathfrak{t}[f,g]$$

holds for all f ∈ dom Tmin ⊂ D and g ∈ D.

As in Lemma 6.11.3 one has dom Tmax ⊂ D when the endpoint b is in the limit-point case.

**Lemma 6.12.3.** Let v_a and u_b be solutions of (L − λ0)y = 0, λ0 ∈ R, which are nonprincipal at a and principal at b, respectively. Assume that the endpoint a is in the limit-circle case and the endpoint b is in the limit-point case. Then

$$\operatorname{dom} T\_{\max} \subset \mathfrak{D}.$$

Proof. Let $f \in \operatorname{dom} T\_{\max}$. Then $f, pf' \in AC(a, b)$, and hence $\sqrt{p}f' \in L^2(c, d)$. It follows as in the proof of Lemma 6.11.3 that $N\_{v\_a} f \in L^2(a, c)$, as $v\_a$ is nonprincipal at $a$. For the behavior of $N\_{u\_b} f$ at $b$, decompose $f \in \operatorname{dom} T\_{\max}$ as

$$f = f\_0 + h,$$

where $f\_0 \in \operatorname{dom} T\_{\min}$ and $h \in \operatorname{dom} T\_{\max}$ is a function which vanishes in a neighborhood of $b$, say $(b', b)$; cf. the proof of Proposition 6.4.9. Since $\operatorname{dom} T\_{\min} \subset \mathfrak{D}$ by Corollary 6.12.2, it is clear that $N\_{u\_b} f\_0 \in L^2(d, b)$. It follows from (6.10.15) with $\varphi = u\_b$ that

$$N\_{u\_b}h = -\frac{W(h, u\_b)}{\sqrt{p}u\_b}.$$

Since $h$ vanishes on $(b', b)$, it follows that $N\_{u\_b} h$ vanishes on $(b', b)$, while on $[d, b']$ the function $x \mapsto W\_x(h, u\_b)$ is bounded and the function $u\_b$ does not vanish. Consequently, one sees that $N\_{u\_b} h \in L^2(d, b)$ and thus $N\_{u\_b} f \in L^2(d, b)$. Hence, it follows that $f \in \mathfrak{D}$. $\square$

Let the mapping $\Lambda : \mathfrak{D} \to \mathbb{C}$ be defined by

$$\Lambda f = \lim\_{x \to a} \frac{f(x)}{v\_a(x)}, \quad f \in \mathfrak{D}. \tag{6.12.4}$$

Note that Λ is well defined by Theorem 6.10.9 (i).

**Lemma 6.12.4.** Let $v\_a$ and $u\_b$ be real solutions of $(L - \lambda\_0)y = 0$, $\lambda\_0 \in \mathbb{R}$, which are nonprincipal at $a$ and principal at $b$, respectively. Assume that the endpoint $a$ is in the limit-circle case and the endpoint $b$ is in the limit-point case. Let $\mathfrak{t}$ be the form in (6.12.2)–(6.12.3). Then for every $\varepsilon > 0$ there exists $D\_\varepsilon > 0$ such that

$$|\Lambda f|^2 \le \varepsilon \operatorname{t} [f] + D\_{\varepsilon} ||f||\_{L^2\_r(a,b)}^2, \quad f \in \mathfrak{D}.$$

Proof. Choose $\varepsilon > 0$ and write the form $\mathfrak{t}$ as the sum of

$$\begin{split} \mathfrak{t}\_{[c,d]}[f,g] &= \int\_{c}^{d} \left( (\sqrt{p}f')(x) \overline{(\sqrt{p}g')(x)} + q(x)f(x)\overline{g(x)} \right) dx \\ &\quad + \frac{(pv\_a')(c)}{v\_a(c)} f(c) \overline{g(c)} - \frac{(pu\_b')(d)}{u\_b(d)} f(d) \overline{g(d)}, \\ \mathfrak{t}\_{[a,c]}[f,g] &= \int\_{a}^{c} (N\_{v\_a}f)(x) \overline{(N\_{v\_a}g)(x)} \, dx + \lambda\_0 \int\_{a}^{c} f(x) \overline{g(x)} r(x) \, dx, \\ \mathfrak{t}\_{[d,b]}[f,g] &= \int\_{d}^{b} (N\_{u\_b}f)(x) \overline{(N\_{u\_b}g)(x)} \, dx + \lambda\_0 \int\_{d}^{b} f(x) \overline{g(x)} r(x) \, dx. \end{split}$$

It has been shown in Lemma 6.9.4 that $\mathfrak{t}$ is independent of the choice of $c$ and $d$, and hence $c \in (a, a\_0)$ and $d \in (b\_0, b)$ can be chosen suitably close to $a$ and $b$, respectively. In fact, choose $c$ so close to $a$ that

$$2\int\_{a}^{c} \frac{1}{p(t)v\_{a}(t)^{2}} \, dt \le \varepsilon.$$

Then, as in the proof of Lemma 6.11.4, it follows that for all $f \in \mathfrak{D}$

$$\left| \lim\_{x \to a} \frac{f(x)}{v\_a(x)} \right|^2 \le 2 \left| \frac{f(c)}{v\_a(c)} \right|^2 + \varepsilon \int\_a^c |N\_{v\_a} f(t)|^2 \, dt.$$

Consequently, with the above choice of $c$ and any choice of $d \in (b\_0, b)$ one sees that for all $f \in \mathfrak{D}$

$$\begin{split} \left| \lim\_{x \to a} \frac{f(x)}{v\_a(x)} \right|^2 &\leq 2 \left| \frac{f(c)}{v\_a(c)} \right|^2 + 2 \left| \frac{f(d)}{u\_b(d)} \right|^2 \\ &\quad + \varepsilon \left( \int\_a^c |N\_{v\_a} f(t)|^2 \, dt + \int\_d^b |N\_{u\_b} f(t)|^2 \, dt \right). \end{split}$$

As in the proof of Lemma 6.11.4, an application of Corollary 6.8.6 shows that there exists $C\_\varepsilon > 0$ such that for all $f \in \mathfrak{D}$ one has

$$\left|\frac{f(c)}{v\_a(c)}\right|^2 + \left|\frac{f(d)}{u\_b(d)}\right|^2 \le C\_\varepsilon \|f\|^2\_{L^2\_r(c,d)} + \varepsilon \mathfrak{t}\_{[c,d]}[f].$$

The assertion follows by combining the above inequalities. $\square$

The following lemma is the counterpart of Lemma 6.8.4 and Lemma 6.11.5 in the present situation.

**Lemma 6.12.5.** Let $v\_a$ and $u\_b$ be solutions of $(L - \lambda\_0)y = 0$, $\lambda\_0 \in \mathbb{R}$, which are nonprincipal at $a$ and principal at $b$, respectively. Assume that the endpoint $a$ is in the limit-circle case and the endpoint $b$ is in the limit-point case, and let $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ be the boundary triplet in Proposition 6.12.1. Then $\{\mathbb{C}, \Lambda\}$ is a boundary pair for $T\_{\min}$ corresponding to $S\_1$ which is compatible with the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$. Moreover, one has

$$(T\_{\max}f,g)\_{L^{2}\_{r}(a,b)} = (\Gamma\_{1}f,\Lambda g) + \mathfrak{t}[f,g], \quad f \in \text{dom}\,T\_{\max}, \ g \in \mathfrak{D}.\tag{6.12.5}$$

Proof. Consider the form $\mathfrak{t}$ defined on $\operatorname{dom} \mathfrak{t} = \mathfrak{D}$ as in (6.12.2)–(6.12.3) and denote the corresponding semibounded self-adjoint operator in $L^2\_r(a,b)$ by $S\_1$; cf. Corollary 6.12.2. Let $\varepsilon > 0$ and $D\_\varepsilon > 0$ be as in Lemma 6.12.4. It follows from the estimate in Lemma 6.12.4 that for $\rho < m(S\_1)$ there exists $C\_{\rho,\varepsilon} > 0$ such that

$$|\Lambda f|^2 \le D\_{\varepsilon} \|f\|\_{L^2\_r(a,b)}^2 + \varepsilon \mathfrak{t}[f] \le C\_{\rho,\varepsilon} \|f\|\_{\mathfrak{t}\_{S\_1}-\rho}^2$$

for all $f \in \mathfrak{D}$. Therefore, $\Lambda \in \mathbf{B}(\mathcal{H}\_{\mathfrak{t}\_{S\_1}-\rho}, \mathbb{C})$. Moreover, by Lemma 6.12.3, one has $\operatorname{dom} T\_{\max} \subset \mathfrak{D}$ and hence $\Lambda$ is an extension of the boundary mapping $\Gamma\_0$ in (6.12.1). Now Lemma 5.6.5 implies that $\{\mathbb{C}, \Lambda\}$ is a boundary pair for $T\_{\min}$ corresponding to $S\_1$.

In order to conclude that $\{\mathbb{C}, \Lambda\}$ and $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ are compatible it remains to show that $A\_1 = S\_1$, where $A\_1$ is the self-adjoint operator defined on $\ker \Gamma\_1$. In fact, due to $\operatorname{dom} T\_{\max} \subset \mathfrak{D}$, the Green formula (6.9.15) is valid for $f \in \operatorname{dom} T\_{\max}$ and $g \in \mathfrak{D}$:

$$\begin{aligned} (T\_{\max}f,g)\_{L^2\_r(a,b)} &= \mathfrak{t}[f,g] + \lim\_{b' \to b} W\_{b'}(f,u\_b) \left(\frac{\overline{g}}{u\_b}\right)(b') \\ &- \lim\_{a' \to a} W\_{a'}(f,v\_a) \left(\frac{\overline{g}}{v\_a}\right)(a'), \end{aligned}$$

where each of the limits exists. Now observe that

$$W(f, u\_b) \left(\frac{\overline{g}}{u\_b}\right) = -p u\_b^2 \left(\frac{f}{u\_b}\right)' \left(\frac{\overline{g}}{u\_b}\right);$$

cf. (6.10.15). Since $u\_b$ is principal at $b$ and $N\_{u\_b} f, N\_{u\_b} g \in L^2(d, b)$ for $b\_0 < d < b$, it follows from Corollary 6.10.2 (applied to the endpoint $b$) with $P = pu\_b^2$, $\varphi = f/u\_b$, and $\psi = g/u\_b$, that

$$\lim\_{b' \to b} W\_{b'}(f, u\_b) \left( \frac{\overline{g}}{u\_b} \right) (b') = \liminf\_{b' \to b} W\_{b'}(f, u\_b) \left( \frac{\overline{g}}{u\_b} \right) (b') = 0.$$

Thus, in the present context, it follows that (6.12.5) holds. Hence,

$$(A\_1 f, g)\_{L^2\_r(a, b)} = \mathfrak{t}[f, g]$$

holds for all $f \in \operatorname{dom} A\_1$ and $g \in \operatorname{dom} \mathfrak{t}$. As $A\_1$ is self-adjoint, the first representation theorem implies $A\_1 = S\_1$. $\square$

Recall that by means of the boundary triplet in Proposition 6.12.1 the self-adjoint extensions of $T\_{\min}$ are in one-to-one correspondence with $\tau \in \mathbb{R} \cup \{\infty\}$ via

$$\text{dom}\,A\_{\tau} = \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, \Gamma\_1 f = \tau \Gamma\_0 f \right\},\tag{6.12.6}$$

where in the case $\tau = \infty$ one means $\Gamma\_0 f = 0$. The next result, which is an immediate consequence of Theorem 5.6.13 and Corollary 5.6.14, makes use of the compatible boundary pair in Lemma 6.12.5 and provides a characterization of all closed semibounded forms associated with the semibounded self-adjoint extensions $A\_\tau$. The boundary triplet and the compatible boundary pair are provided by the choice of the solution $v\_a$ of $(L - \lambda\_0)y = 0$ which is nonprincipal at $a$.

**Theorem 6.12.6.** Let $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ be the boundary triplet in Proposition 6.12.1, let $\tau \in \mathbb{R} \cup \{\infty\}$, and let $A\_\tau$ be the corresponding self-adjoint restriction of $T\_{\max}$ in (6.12.6). Then $A\_\tau$ is semibounded from below and the corresponding densely defined closed semibounded form $\mathfrak{t}\_\tau$ in $L^2\_r(a, b)$ such that

$$(A\_\tau f, g)\_{L^2\_r(a, b)} = \mathfrak{t}\_\tau[f, g], \quad f \in \text{dom}\, A\_\tau, \ g \in \text{dom}\, \mathfrak{t}\_\tau,$$

is given in terms of $\mathfrak{t}$ in (6.12.2)–(6.12.3) and $\Lambda$ in (6.12.4) as follows:

(i) If $\tau \in \mathbb{R}$, then

$$\mathfrak{t}\_{\tau}[f,g] = \mathfrak{t}[f,g] + \tau(\Lambda f, \Lambda g), \quad \text{dom } \mathfrak{t}\_{\tau} = \mathfrak{D}.$$

(ii) If $\tau = \infty$, then $A\_\tau = A\_0$ coincides with the Friedrichs extension $S\_{\rm F}$ and

$$\mathfrak{t}\_{\tau}[f,g] = \mathfrak{t}[f,g], \quad \text{dom}\,\mathfrak{t}\_{\tau} = \left\{ h \in \mathfrak{D} : \Lambda h = 0 \right\}.$$

In the same way as in the previous section it will be discussed briefly what the results look like for a different choice of the nonprincipal solution. Let $w\_a$ be a solution of $(L - \lambda\_0)y = 0$ which is nonprincipal at $a$ and assume that $v\_a$ and $w\_a$ do not vanish on $(a, a\_0)$. Denote the form generated by the solutions $w\_a$ and $u\_b$ by $\mathfrak{t}'$ and let $u\_a$ be a solution of $(L - \lambda\_0)y = 0$ which is principal at $a$ and which satisfies $W(u\_a, v\_a) = 1$. Then one has $\operatorname{dom} \mathfrak{t}' = \operatorname{dom} \mathfrak{t} = \mathfrak{D}$ and

$$w\_a = \alpha\_a v\_a + \beta\_a u\_a$$

for some $\alpha\_a, \beta\_a \in \mathbb{R}$, where $\alpha\_a \neq 0$. Denote the boundary triplet generated by $w\_a$ by $\{\mathbb{C}, \Gamma\_0', \Gamma\_1'\}$ and let $\{\mathbb{C}, \Lambda'\}$ be the corresponding boundary pair; cf. Proposition 6.12.1 and (6.12.4). Then the following result is clear; cf. Proposition 6.11.7.

**Proposition 6.12.7.** The boundary triplet $\{\mathbb{C}, \Gamma\_0', \Gamma\_1'\}$ and the boundary pair $\{\mathbb{C}, \Lambda'\}$ generated by the nonprincipal solution $w\_a$ are given by

$$
\Lambda' f = \frac{1}{\alpha\_a} \Lambda f, \quad f \in \mathfrak{D},
$$

and

$$
\Gamma\_1' f = \alpha\_a \,\Gamma\_1 f + \beta\_a \,\Lambda f, \quad f \in \text{dom}\, T\_{\text{max}}\,.
$$

Moreover, the form $\mathfrak{t}'$ coincides with $\mathfrak{t}\_\tau$ in Theorem 6.12.6, where $\tau \in \mathbb{R}$ is given by

$$
\tau = -\frac{\beta\_a}{\alpha\_a}.
$$

Next let $u\_a$ and $u\_b$ be nontrivial solutions of $(L - \lambda\_0)y = 0$ which are principal at $a$ and $b$, respectively, and assume that $u\_a$ does not vanish on $(a, a\_0)$ and that $u\_b$ does not vanish on $(b\_0, b)$. Denote the form generated by the solutions $u\_a$ and $u\_b$ by $\bar{\mathfrak{t}}$; cf. Theorem 6.9.6. Then the following analog of Proposition 6.11.8 holds.

**Proposition 6.12.8.** The form $\bar{\mathfrak{t}}$ coincides with $\mathfrak{t}\_\infty$ in Theorem 6.12.6 or, equivalently, $\bar{\mathfrak{t}}$ is the form generated by the Friedrichs extension:

$$\mathfrak{t}\_{S\_{\rm F}} = \bar{\mathfrak{t}}.$$

The following equivalent characterizations of the Friedrichs extension of $T\_{\min}$ are mentioned for completeness; cf. Proposition 6.10.11 and Corollary 6.11.9.

**Corollary 6.12.9.** Let $v\_a$ and $u\_a$ be solutions of $(L - \lambda\_0)y = 0$, $\lambda\_0 \in \mathbb{R}$, which are nonprincipal and principal at $a$, respectively. Assume that the endpoint $a$ is in the limit-circle case and the endpoint $b$ is in the limit-point case. Then $f \in \operatorname{dom} T\_{\max}$ is in the domain of the Friedrichs extension $S\_{\rm F}$ of $T\_{\min}$ if and only if one of the following equivalent conditions holds:


Finally, the special case that the endpoint a is regular is briefly discussed.

**Remark 6.12.10.** The considerations in this section simplify if the endpoint $a$ is regular. In that case one can choose the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ in Proposition 6.4.1. The form $\mathfrak{t}$ and $\mathfrak{D}$ in (6.12.2) and (6.12.3) reduce to (6.9.19) and (6.9.20) with $\varphi\_b = u\_b$ in Remark 6.9.9, respectively. The mapping $\Lambda : \mathfrak{D} \to \mathbb{C}$ in (6.12.4) of the corresponding boundary pair has the form

$$
\Lambda f = f(a), \quad f \in \mathfrak{D}.
$$

## **6.13 Integrable potentials**

In this section the Sturm–Liouville differential expression

$$L = -D^2 + q, \quad D = d/dx, \quad \text{with } q \in L^1(0, \infty) \text{ real},$$

is studied on the interval $(0, \infty)$; note that in this special case $r = p = 1$. It is clear that the endpoint $0$ is regular. In the following lemma it will be shown that $-D^2 + q$ can be seen as a perturbation of the expression $-D^2$, in the sense that the fundamental solutions have the same asymptotic behavior as $x \mapsto e^{i\sqrt{\lambda}x}$ and $x \mapsto e^{-i\sqrt{\lambda}x}$; cf. Example 6.4.2. In this context it also follows that the endpoint $\infty$ is in the limit-point case. Throughout this section it will be tacitly assumed that the square root $\sqrt{\cdot}$ is fixed by the requirement that $\operatorname{Im} \sqrt{\lambda} > 0$ for all $\lambda \in \mathbb{C} \setminus [0, \infty)$ and $\sqrt{\lambda} \ge 0$ for $\lambda \in [0, \infty)$.

**Lemma 6.13.1.** Assume that $q \in L^1(0, \infty)$ and that $\lambda \in \mathbb{C} \setminus [0, \infty)$. Then there is a fundamental system $(e\_1(\cdot, \lambda); e\_2(\cdot, \lambda))$ of the equation $(L - \lambda)y = 0$ such that

$$\begin{aligned} e\_1(x,\lambda) &= e^{i\sqrt{\lambda}x}(1+o(1)), \quad x \to \infty, \\ e\_1'(x,\lambda) &= i\sqrt{\lambda}e^{i\sqrt{\lambda}x}(1+o(1)), \quad x \to \infty, \end{aligned}$$

and

$$\begin{aligned} e\_2(x,\lambda) &= e^{-i\sqrt{\lambda}x}(1+o(1)), \quad x \to \infty, \\ e\_2'(x,\lambda) &= -i\sqrt{\lambda}e^{-i\sqrt{\lambda}x}(1+o(1)), \quad x \to \infty. \end{aligned}$$

In particular, $e\_1(\cdot, \lambda) \in L^2(0, \infty)$ and $e\_2(\cdot, \lambda) \notin L^2(0, \infty)$ for all $\lambda \in \mathbb{C} \setminus [0, \infty)$.

Proof. Due to the integrability of q the nonnegative function

$$
\sigma(x) = \int\_x^{\infty} |q(t)| \, dt, \qquad x > 0,
$$

is well defined, nonincreasing, and $\lim\_{x \to \infty} \sigma(x) = 0$. The proof of the lemma will be given in three steps. In the first two steps each of the above solutions $e\_1(\cdot, \lambda)$ and $e\_2(\cdot, \lambda)$ is constructed; in the third step the linear independence is shown.

Step 1. Let $\lambda \in \mathbb{C} \setminus \{0\}$. It will be shown that there is a bounded function $\alpha(\cdot, \lambda)$ such that the integral equation

$$\alpha(x,\lambda) = 1 + \int\_x^{\infty} \frac{e^{2i\sqrt{\lambda}(t-x)} - 1}{2i\sqrt{\lambda}} q(t) \,\alpha(t,\lambda) \,dt, \quad x > 0,\tag{6.13.1}$$

is satisfied. Note that for t ≥ x one has

$$|e^{2i\sqrt{\lambda}(t-x)} - 1| \le 2.$$

Define the sequence of functions $\alpha\_n$, $n \in \mathbb{N} \cup \{0\}$, inductively by

$$\alpha\_0(x,\lambda) = 1 \quad \text{and} \quad \alpha\_{n+1}(x,\lambda) = \int\_x^\infty \frac{e^{2i\sqrt{\lambda}(t-x)} - 1}{2i\sqrt{\lambda}} \, q(t)\,\alpha\_n(t,\lambda) \,dt$$

for $x > 0$. Since $\sigma$ is nonincreasing, it is easily seen by induction that

$$|\alpha\_n(x,\lambda)| \le \left(\frac{\sigma(x)}{|\sqrt{\lambda}|}\right)^n, \quad x > 0.$$

Now choose $\delta > 0$ such that $|\sqrt{\lambda}| \ge \delta$. Then there exists an $x\_\delta > 0$ such that

$$\frac{\sigma(x\_{\delta})}{\delta} < 1,\tag{6.13.2}$$

and note that for all $x \ge x\_\delta$ it follows that

$$\frac{\sigma(x)}{|\sqrt{\lambda}|} \le \frac{\sigma(x\_\delta)}{\delta} < 1.$$

For the function $\alpha(x, \lambda) = \sum\_{n=0}^{\infty} \alpha\_n(x, \lambda)$, defined for $x \ge x\_\delta$, one has

$$|\alpha(x,\lambda)| \le \sum\_{n=0}^{\infty} |\alpha\_n(x,\lambda)| \le \sum\_{n=0}^{\infty} \left(\frac{\sigma(x)}{|\sqrt{\lambda}|}\right)^n \le \sum\_{n=0}^{\infty} \left(\frac{\sigma(x\_\delta)}{\delta}\right)^n = \frac{1}{1 - \frac{\sigma(x\_\delta)}{\delta}}$$

for $x \ge x\_\delta$. Hence, the function $\alpha(\cdot, \lambda)$ is well defined and bounded for $x \ge x\_\delta$. By dominated convergence, it follows for $x \ge x\_\delta$ that

$$\begin{split} \alpha(x,\lambda) &= 1 + \sum\_{n=0}^{\infty} \alpha\_{n+1}(x,\lambda) \\ &= 1 + \sum\_{n=0}^{\infty} \int\_{x}^{\infty} \frac{e^{2i\sqrt{\lambda}(t-x)} - 1}{2i\sqrt{\lambda}} q(t) \, \alpha\_n(t,\lambda) \, dt \\ &= 1 + \int\_{x}^{\infty} \frac{e^{2i\sqrt{\lambda}(t-x)} - 1}{2i\sqrt{\lambda}} q(t) \sum\_{n=0}^{\infty} \alpha\_n(t,\lambda) \, dt \\ &= 1 + \int\_{x}^{\infty} \frac{e^{2i\sqrt{\lambda}(t-x)} - 1}{2i\sqrt{\lambda}} q(t) \, \alpha(t,\lambda) \, dt. \end{split}$$

Hence, the integral equation (6.13.1) is satisfied for all $x \ge x\_\delta$. Note also that for $x > x\_\delta$

$$\alpha'(x,\lambda) = -\int\_x^{\infty} e^{2i\sqrt{\lambda}(t-x)} \, q(t) \, \alpha(t,\lambda) \, dt \tag{6.13.3}$$

and

$$\alpha''(x,\lambda) = 2i\sqrt{\lambda} \int\_x^{\infty} e^{2i\sqrt{\lambda}(t-x)} \, q(t) \, \alpha(t,\lambda) \, dt + q(x)\alpha(x,\lambda). \tag{6.13.4}$$

It is clear that $|\alpha'(x, \lambda)| \to 0$ as $x \to \infty$.

Now consider the function $e\_1(x, \lambda) = e^{i\sqrt{\lambda}x}\alpha(x, \lambda)$, defined for $x \ge x\_\delta$. It follows from a straightforward computation and (6.13.3)–(6.13.4) that $e\_1(\cdot, \lambda)$ satisfies the differential equation $(L - \lambda)e\_1 = 0$, and the asymptotic properties of $e\_1$ and $e\_1'$ for $x \to \infty$ are a consequence of the asymptotic properties of $\alpha$ in (6.13.1) and $\alpha'$ in (6.13.3) for $x \to \infty$. It remains to note that the solution $e\_1$ on the interval $(x\_\delta, \infty)$ can be extended to a solution on $(0, \infty)$.

Step 2. In this step it is assumed that $\lambda \in \mathbb{C} \setminus [0, \infty)$. As in Step 1, choose $\delta > 0$ and $x\_\delta > 0$ such that $|\sqrt{\lambda}| \ge \delta$ and (6.13.2) hold. It will be shown that there exists a bounded function $\beta(\cdot, \lambda)$ such that the integral equation

$$\beta(x,\lambda) = 1 + \frac{1}{2i\sqrt{\lambda}} \int\_{x\_\delta}^x e^{2i\sqrt{\lambda}(x-t)} \, q(t) \, \beta(t,\lambda) \, dt + \frac{1}{2i\sqrt{\lambda}} \int\_x^\infty q(t) \, \beta(t,\lambda) \, dt$$

is satisfied; in particular, then also the second integral is well defined.

Note first that for $x\_\delta \le t \le x$ one has

$$|e^{2i\sqrt{\lambda}(x-t)}| \le 1.\tag{6.13.5}$$

Define the sequence of functions $\beta\_n$, $n \in \mathbb{N} \cup \{0\}$, inductively by $\beta\_0(x, \lambda) = 1$ and

$$\beta\_{n+1}(x,\lambda) = \frac{1}{2i\sqrt{\lambda}} \int\_{x\_\delta}^x e^{2i\sqrt{\lambda}(x-t)} \, q(t) \, \beta\_n(t,\lambda) \, dt + \frac{1}{2i\sqrt{\lambda}} \int\_x^\infty q(t) \, \beta\_n(t,\lambda) \, dt$$

for $x > x\_\delta$. In the same way as in Step 1 it follows that

$$|\beta\_n(x,\lambda)| \le \left(\frac{\sigma(x\_\delta)}{|\sqrt{\lambda}|}\right)^n \le \left(\frac{\sigma(x\_\delta)}{\delta}\right)^n, \quad x \ge x\_\delta,$$

and the function $\beta(x, \lambda) = \sum\_{n=0}^{\infty} \beta\_n(x, \lambda)$ is well defined for $x \ge x\_\delta$, bounded by some constant $M\_\beta \ge 0$, and solves the integral equation. For $x \ge x\_\delta$ define the function $e\_2(x, \lambda) = e^{-i\sqrt{\lambda}x}\beta(x, \lambda)$. Since for $x > x\_\delta$ one has

$$\beta'(x,\lambda) = \int\_{x\_\delta}^x e^{2i\sqrt{\lambda}(x-t)} \, q(t) \, \beta(t,\lambda) \, dt$$

and

$$
\beta''(x,\lambda) = 2i\sqrt{\lambda} \int\_{x\_\delta}^x e^{2i\sqrt{\lambda}(x-t)} \, q(t) \, \beta(t,\lambda) \, dt + q(x)\beta(x,\lambda),
$$

it follows that $e\_2(\cdot, \lambda)$ satisfies the differential equation $(L - \lambda)e\_2 = 0$. For the asymptotic properties of $e\_2$ and $e\_2'$ observe first that

$$\begin{split} &\int\_{x\_{\delta}}^{x} e^{2i\sqrt{\lambda}(x-t)} \, q(t) \, \beta(t,\lambda) \, dt \\ &\quad = e^{i\sqrt{\lambda}x} \int\_{x\_{\delta}}^{x/2} e^{i\sqrt{\lambda}(x-2t)} \, q(t) \, \beta(t,\lambda) \, dt + \int\_{x/2}^{x} e^{2i\sqrt{\lambda}(x-t)} \, q(t) \, \beta(t,\lambda) \, dt \end{split} \tag{6.13.6}$$

tends to $0$ for $x \to \infty$. In fact, since $|e^{i\sqrt{\lambda}(x-2t)}| \le 1$ for $x\_\delta \le t \le x/2$, the first term on the right-hand side in (6.13.6) satisfies the estimate

$$\left| e^{i\sqrt{\lambda}x} \int\_{x\_{\delta}}^{x/2} e^{i\sqrt{\lambda}(x-2t)} \, q(t) \, \beta(t, \lambda) \, dt \right| \le e^{-(\operatorname{Im}\sqrt{\lambda})x} M\_{\beta} \int\_{x\_{\delta}}^{x/2} |q(t)| \, dt,$$

and hence tends to $0$ for $x \to \infty$, as $\operatorname{Im} \sqrt{\lambda} > 0$ and $q \in L^1(0, \infty)$. Similarly, the second term on the right-hand side in (6.13.6) tends to $0$ for $x \to \infty$ by (6.13.5), $|\beta(t, \lambda)| \le M\_\beta$, and $q \in L^1(0, \infty)$. Now the asymptotic properties of $e\_2$ and $e\_2'$ for $x \to \infty$ follow from the asymptotic properties of $\beta$ and $\beta'$ for $x \to \infty$. Finally, the solution $e\_2$ can be extended to a solution on $(0, \infty)$.

Step 3. Since the Wronskian W(e1(·, λ), e2(·, λ)) is constant, it follows from the asymptotic behavior of e1(·, λ) and e2(·, λ) that

$$W(e\_1(\cdot,\lambda), e\_2(\cdot,\lambda)) = -2i\sqrt{\lambda}.$$

Hence, $e\_1(\cdot, \lambda)$ and $e\_2(\cdot, \lambda)$ form a fundamental system. $\square$
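The Picard iteration in Step 1 can also be tried out numerically. The following sketch is not part of the text: the sample potential $q = \mathbf{1}\_{[0,1]}$, the grid, and the tolerances are illustrative choices. For this $q$ one has $\sigma(x) = \max(0, 1-x)$, and with $\lambda = 2i$ the contraction condition $\sigma(0)/|\sqrt{\lambda}| < 1$ holds on all of $[0,3]$, so the iteration for (6.13.1) converges on the whole grid; afterwards one checks that $e\_1 = e^{i\sqrt{\lambda}x}\alpha$ satisfies $-e\_1'' + qe\_1 = \lambda e\_1$ up to discretization error.

```python
import cmath

# Picard iteration for the integral equation (6.13.1); a numerical sketch,
# not from the book.  Sample potential: q = 1 on [0, 1] and q = 0 elsewhere,
# so sigma(x) = max(0, 1 - x) and all integrals have finite range.
lam = 2j                                 # a point in C \ [0, infinity)
s = cmath.sqrt(lam)
if s.imag < 0:                           # fix the branch: Im sqrt(lambda) > 0
    s = -s

N = 601                                  # grid on [0, 3]
h = 3.0 / (N - 1)
x = [i * h for i in range(N)]
q = [1.0 if xi <= 1.0 else 0.0 for xi in x]
E = [cmath.exp(2j * s * xi) for xi in x]          # exp(2 i sqrt(lam) x)

alpha = [1.0 + 0j] * N
for _ in range(60):                      # sigma(0)/|sqrt(lam)| < 1: contraction
    new = []
    for i in range(N):
        acc = 0j                         # trapezoid rule over t >= x_i;
        for j in range(i, N - 1):        # q has compact support
            if q[j] == 0.0 and q[j + 1] == 0.0:
                break
            f0 = (E[j] / E[i] - 1) / (2j * s) * q[j] * alpha[j]
            f1 = (E[j + 1] / E[i] - 1) / (2j * s) * q[j + 1] * alpha[j + 1]
            acc += 0.5 * h * (f0 + f1)
        new.append(1.0 + acc)
    diff = max(abs(a - b) for a, b in zip(alpha, new))
    alpha = new

# e1 = exp(i sqrt(lam) x) alpha should satisfy -e1'' + (q - lam) e1 = 0
e1 = [cmath.exp(1j * s * xi) * a for xi, a in zip(x, alpha)]
res = max(abs(-(e1[i-1] - 2*e1[i] + e1[i+1]) / h**2 + (q[i] - lam) * e1[i])
          for i in range(1, N - 1) if 0.1 < x[i] < 0.9)
```

Note that past the support of $q$ the iteration returns $\alpha \equiv 1$ exactly, matching the normalization $\alpha(x,\lambda) \to 1$ as $x \to \infty$.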

It is a direct consequence of Lemma 6.13.1 that the defect numbers of $T\_{\min}$ are $(1, 1)$. Hence, the endpoint $\infty$ is in the limit-point case and $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$, where $\Gamma\_0$ and $\Gamma\_1$ are defined by

$$
\Gamma\_0 f = f(0) \quad \text{and} \quad \Gamma\_1 f = f'(0), \quad f \in \text{dom}\, T\_{\text{max}}\,,\tag{6.13.7}
$$

is a boundary triplet for $T\_{\max}$; cf. Proposition 6.4.1. The self-adjoint restriction $A\_0$ of $T\_{\max}$ is given by

$$A\_0 = -D^2 + q, \quad \text{dom}\, A\_0 = \left\{ f \in \text{dom}\, T\_{\text{max}} : f(0) = 0 \right\}.$$

Let $u\_1(\cdot, \lambda)$ and $u\_2(\cdot, \lambda)$ be the fundamental system corresponding to the usual initial conditions, that is, $u\_1(\cdot, \lambda)$ and $u\_2(\cdot, \lambda)$ satisfy

$$
\begin{pmatrix} u\_1(0,\lambda) & u\_2(0,\lambda) \\ u\_1'(0,\lambda) & u\_2'(0,\lambda) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.\tag{6.13.8}
$$

Then the Weyl function belonging to the boundary triplet in (6.13.7) is uniquely defined by the property

$$u\_1(\cdot,\lambda) + M(\lambda)u\_2(\cdot,\lambda) \in L^2(0,\infty), \quad \lambda \in \mathbb{C} \backslash \mathbb{R};\tag{6.13.9}$$

cf. Proposition 6.4.1. In order to determine the corresponding Weyl function one has to compare the fundamental system $(u\_1(\cdot, \lambda); u\_2(\cdot, \lambda))$ with the fundamental system $(e\_1(\cdot, \lambda); e\_2(\cdot, \lambda))$ from Lemma 6.13.1.
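As a quick sanity check (an illustration, not part of the text): for $q = 0$ one has $u\_1(x,\lambda) = \cos\sqrt{\lambda}x$ and $u\_2(x,\lambda) = \sin(\sqrt{\lambda}x)/\sqrt{\lambda}$, and the characterization (6.13.9) forces $M(\lambda) = i\sqrt{\lambda}$, since $u\_1 + i\sqrt{\lambda}\,u\_2 = e^{i\sqrt{\lambda}x}$ decays exponentially when $\operatorname{Im}\sqrt{\lambda} > 0$. The value of $\lambda$ below is an arbitrary sample point.

```python
import cmath

# Sanity check of (6.13.9) in the unperturbed case q = 0 (illustrative):
# u1 + i*sqrt(lam)*u2 = exp(i*sqrt(lam)*x), which lies in L^2(0, infinity).
lam = -1 + 0.5j                    # sample point in C \ [0, infinity)
s = cmath.sqrt(lam)
if s.imag < 0:                     # branch with Im sqrt(lambda) > 0
    s = -s
M = 1j * s                         # candidate Weyl function value for q = 0

# deviation of u1 + M*u2 from exp(i*sqrt(lam)*x) at a few sample points
dev = max(abs(cmath.cos(s * x) + M * cmath.sin(s * x) / s
              - cmath.exp(1j * s * x))
          for x in (0.0, 0.7, 3.0, 10.0))
# exponential decay of the combined solution
decay = abs(cmath.exp(1j * s * 10.0)) / abs(cmath.exp(1j * s * 1.0))
```

Any other choice of $M(\lambda)$ would leave a multiple of $e\_2(\cdot,\lambda)$, which grows exponentially, so the $L^2$ requirement in (6.13.9) pins $M$ down uniquely.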

An important ingredient for the following considerations is Gronwall's lemma.

**Lemma 6.13.2.** Let $f$ be a continuous complex-valued function on $[c, b)$.

(i) Assume that $\alpha, \beta \in L^1\_{\rm loc}[c, b)$ are nonnegative functions and that $\alpha$ is nondecreasing. If

$$|f(t)| \le \alpha(t) + \int\_{c}^{t} \beta(s)|f(s)| \, ds, \quad t \in (c, b), \tag{6.13.10}$$

then f satisfies the inequality

$$|f(t)| \le \alpha(t) \, e^{\int\_c^t \beta(u) \, du}, \quad t \in (c, b). \tag{6.13.11}$$

(ii) Assume that $\beta \in L^1(c, b)$ is a nonnegative function and that $\alpha \ge 0$ is a constant. If $\beta f \in L^1(c, b)$ and

$$|f(t)| \le \alpha + \int\_{t}^{b} \beta(s)|f(s)| \, ds, \quad t \in [c, b), \tag{6.13.12}$$

then f satisfies the inequality

$$|f(t)| \le \alpha \, e^{\int\_t^b \beta(u) \, du}, \quad t \in [c, b). \tag{6.13.13}$$

Proof. (i) It follows from the inequality in (6.13.10) that

$$\begin{aligned} &\frac{d}{ds} \left( e^{-\int\_c^s \beta(u) \, du} \int\_c^s \beta(u) |f(u)| \, du \right) \\ &= \left[ |f(s)| - \int\_c^s \beta(u) |f(u)| \, du \right] \beta(s) e^{-\int\_c^s \beta(u) \, du} \\ &\leq \alpha(s) \beta(s) e^{-\int\_c^s \beta(u) \, du} \end{aligned}$$

almost everywhere. Integration of this inequality over the interval [c, t] leads to

$$e^{-\int\_{c}^{t} \beta(u) \, du} \int\_{c}^{t} \beta(u) |f(u)| \, du \le \int\_{c}^{t} \alpha(s) \beta(s) e^{-\int\_{c}^{s} \beta(u) \, du} \, ds$$

or, equivalently,

$$\int\_{c}^{t} \beta(u) |f(u)| \, du \le \int\_{c}^{t} \alpha(s)\beta(s)e^{\int\_{s}^{t} \beta(u) \, du} \, ds.$$

Due to (6.13.10) and the assumption that α is nondecreasing one obtains

$$\begin{aligned} |f(t)| - \alpha(t) &\leq \int\_c^t \beta(u) |f(u)| \, du \\ &\leq \alpha(t) \int\_c^t \beta(s) e^{\int\_s^t \beta(u) \, du} \, ds \\ &= -\alpha(t) \int\_c^t \frac{d}{ds} e^{\int\_s^t \beta(u) \, du} \, ds \\ &= -\alpha(t) \left(1 - e^{\int\_c^t \beta(u) \, du} \right), \end{aligned}$$

which gives (6.13.11).

(ii) It follows from the inequality in (6.13.12) that

$$\begin{aligned} &\frac{d}{ds} \left( e^{-\int\_s^b \beta(u) \, du} \int\_s^b \beta(u) |f(u)| \, du \right) \\ &= \left[ \int\_s^b \beta(u) |f(u)| \, du - |f(s)| \right] \beta(s) e^{-\int\_s^b \beta(u) \, du} \\ &\ge -\alpha \beta(s) e^{-\int\_s^b \beta(u) \, du} \end{aligned}$$

almost everywhere. Integration of this inequality over the interval [t, b] leads to

$$-e^{-\int\_{t}^{b} \beta(u) \, du} \int\_{t}^{b} \beta(u) |f(u)| \, du \geq -\alpha \int\_{t}^{b} \beta(s) e^{-\int\_{s}^{b} \beta(u) \, du} \, ds$$

or, equivalently,

$$\begin{aligned} \int\_{t}^{b} \beta(u) |f(u)| \, du &\leq \alpha \int\_{t}^{b} \beta(s) e^{\int\_{t}^{s} \beta(u) \, du} \, ds \\ &= \alpha \int\_{t}^{b} \frac{d}{ds} e^{\int\_{t}^{s} \beta(u) \, du} \, ds \\ &= \alpha \left( e^{\int\_{t}^{b} \beta(u) \, du} - 1 \right). \end{aligned}$$

Due to (6.13.12) one obtains (6.13.13). $\square$
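A small numerical illustration of part (i) (a sketch with made-up data, not from the book): take $c = 0$, constant $\alpha = 1$, $\beta(s) = s$, and $g(t) = \tfrac{1}{2}e^{t^2/4}$. Then $1 + \int\_0^t s\,g(s)\,ds = e^{t^2/4} \ge g(t)$, so $g$ satisfies the hypothesis (6.13.10), and the conclusion (6.13.11) says $g$ stays below the Gronwall bound $e^{t^2/2}$:

```python
import math

# Numerical illustration of Gronwall's lemma, part (i); the data below are
# invented for the example: c = 0, alpha = 1, beta(s) = s, g(t) = e^{t^2/4}/2.
h, N = 0.001, 2000                     # grid on [0, 2]
t = [i * h for i in range(N + 1)]
g = [0.5 * math.exp(ti * ti / 4) for ti in t]

integral = 0.0
hyp_ok, bound_ok = True, True
for i in range(N + 1):
    if i > 0:                          # trapezoid rule for int_0^t s*g(s) ds
        integral += 0.5 * h * (t[i-1] * g[i-1] + t[i] * g[i])
    # hypothesis (6.13.10):  g(t) <= alpha + int_0^t beta(s) g(s) ds
    hyp_ok = hyp_ok and g[i] <= 1.0 + integral + 1e-9
    # conclusion (6.13.11):  g(t) <= alpha * exp(int_0^t beta(u) du)
    bound_ok = bound_ok and g[i] <= math.exp(t[i] * t[i] / 2) + 1e-9
```

The extremal case $f(t) = e^{t^2/2}$ turns both the hypothesis and the conclusion into equalities, which shows that (6.13.11) cannot be improved in general.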

The next lemma on the asymptotic properties of solutions of (L − λ)u = 0 is the first step to determine the Weyl function corresponding to the boundary triplet in (6.13.7).

**Lemma 6.13.3.** Assume that $q \in L^1(0, \infty)$, let $\lambda \in \mathbb{C} \setminus [0, \infty)$ and $c\_1, c\_2 \in \mathbb{C}$. Let $u(\cdot, \lambda)$ be a solution of $(L - \lambda)u = 0$ satisfying

$$(-D^2 + q)u = \lambda u, \quad u(0, \lambda) = c\_1, \quad u'(0, \lambda) = c\_2. \tag{6.13.14}$$

Then

$$|e^{i\sqrt{\lambda}x}u(x,\lambda)| \le \left(|c\_1| + \frac{|c\_2|}{|\sqrt{\lambda}|}\right) \exp\left(\frac{1}{|\sqrt{\lambda}|} \int\_0^x |q(t)| \, dt\right) \tag{6.13.15}$$

and

$$u(x,\lambda) = e^{-i\sqrt{\lambda}x} \left(\frac{c\_1}{2} - \frac{c\_2}{2i\sqrt{\lambda}} - \frac{1}{2i\sqrt{\lambda}} \int\_0^\infty e^{i\sqrt{\lambda}t} q(t) u(t, \lambda) \, dt + o(1)\right),$$

as x → ∞.

Proof. A simple computation shows that for $\lambda \in \mathbb{C} \setminus \{0\}$ the unique solution of (6.13.14) is given by

$$u(x,\lambda) = c\_1 \cos\sqrt{\lambda}x + c\_2 \frac{\sin\sqrt{\lambda}x}{\sqrt{\lambda}} + \int\_0^x \frac{\sin\sqrt{\lambda}(x-t)}{\sqrt{\lambda}} q(t)u(t,\lambda) \,dt. \tag{6.13.16}$$

It follows from (6.13.16) that $\varphi(x, \lambda) = e^{-(\operatorname{Im}\sqrt{\lambda})x}u(x, \lambda)$ satisfies

$$|\varphi(x,\lambda)| \le |c\_1| + \frac{|c\_2|}{|\sqrt{\lambda}|} + \frac{1}{|\sqrt{\lambda}|} \int\_0^x |q(t)| \, |\varphi(t, \lambda)| \, dt,$$

and hence Lemma 6.13.2 (i) leads to

$$|\varphi(x,\lambda)| \le \left( |c\_1| + \frac{|c\_2|}{|\sqrt{\lambda}|} \right) \exp\left( \frac{1}{|\sqrt{\lambda}|} \int\_0^x |q(t)| \, dt \right).$$

Since $|e^{i\sqrt{\lambda}x}u(x, \lambda)| = |e^{-(\operatorname{Im}\sqrt{\lambda})x}u(x, \lambda)| = |\varphi(x, \lambda)|$, the estimate (6.13.15) follows.

Furthermore, (6.13.16) yields that

$$\begin{split} u(x,\lambda) &= e^{-i\sqrt{\lambda}x} \left( \frac{c\_1}{2} - \frac{c\_2}{2i\sqrt{\lambda}} \right) + e^{i\sqrt{\lambda}x} \left( \frac{c\_1}{2} + \frac{c\_2}{2i\sqrt{\lambda}} \right) \\ &\quad - \frac{1}{2i\sqrt{\lambda}} \int\_0^x e^{-i\sqrt{\lambda}(x-t)} q(t) u(t,\lambda) \, dt \\ &\quad + \frac{1}{2i\sqrt{\lambda}} \int\_0^x e^{i\sqrt{\lambda}(x-t)} q(t) u(t,\lambda) \, dt. \end{split} \tag{6.13.17}$$

For the second term on the right-hand side of (6.13.17) with $\lambda \in \mathbb{C} \setminus [0, \infty)$ one has

$$e^{i\sqrt{\lambda}x} \left(\frac{c\_1}{2} + \frac{c\_2}{2i\sqrt{\lambda}}\right) = e^{-i\sqrt{\lambda}x} o(1), \quad x \to \infty.$$

To estimate the third term on the right-hand side of (6.13.17) note first that this term is equal to

$$e^{-i\sqrt{\lambda}x} \left[ -\frac{1}{2i\sqrt{\lambda}} \int\_0^x e^{i\sqrt{\lambda}t} q(t) u(t, \lambda) \, dt \right].$$

Next one has

$$-\frac{1}{2i\sqrt{\lambda}}\int\_x^{\infty} e^{i\sqrt{\lambda}t} q(t)u(t,\lambda) \,dt = o(1), \quad x \to \infty.$$

In fact, this holds since $t \mapsto e^{i\sqrt{\lambda}t}u(t, \lambda)$ is bounded by (6.13.15) and

$$\left| \int\_{x}^{\infty} e^{i\sqrt{\lambda}t} q(t) u(t, \lambda) \, dt \right| \le C \int\_{x}^{\infty} |q(t)| \, dt.$$

Therefore, the third term on the right-hand side of (6.13.17) has the form

$$e^{-i\sqrt{\lambda}x} \left( -\frac{1}{2i\sqrt{\lambda}} \int\_0^\infty e^{i\sqrt{\lambda}t} \, q(t)u(t,\lambda) \, dt + o(1) \right).$$

For the fourth term on the right-hand side of (6.13.17) one uses again that $t \mapsto e^{i\sqrt{\lambda}t}u(t, \lambda)$ is bounded. Then

$$\left| \frac{1}{2i\sqrt{\lambda}} \int\_0^x e^{i\sqrt{\lambda}(x-t)} q(t) u(t, \lambda) \, dt \right| \le \frac{C}{|\sqrt{\lambda}|} \left( \int\_0^x e^{(\operatorname{Im}\sqrt{\lambda})(2t-x)} |q(t)| \, dt \right),$$

and splitting the interval of integration leads to

$$\begin{split} &\int\_{0}^{x} e^{(\text{Im}\sqrt{\lambda})(2t-x)} |q(t)| \, dt \\ & \qquad = \int\_{0}^{x/2} e^{(\text{Im}\sqrt{\lambda})(2t-x)} |q(t)| \, dt + \int\_{x/2}^{x} e^{(\text{Im}\sqrt{\lambda})(2t-x)} |q(t)| \, dt \\ & \qquad \le \int\_{0}^{x/2} |q(t)| \, dt + e^{(\text{Im}\sqrt{\lambda})x} \int\_{x/2}^{x} |q(t)| \, dt \\ & \qquad = e^{(\text{Im}\sqrt{\lambda})x} \, o(1), \quad x \to \infty. \end{split}$$

This completes the proof, as the last assertion in the lemma now follows from (6.13.17). $\square$
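The representation (6.13.16) and the bound (6.13.15) can be tested numerically. In this sketch (not from the book; the potential $q(t) = e^{-t}$, the values of $\lambda$, $c\_1$, $c\_2$, and the tolerances are illustrative assumptions) a solution of (6.13.14) is computed by a classical Runge–Kutta scheme and compared with the right-hand side of (6.13.16):

```python
import cmath
import math

# Numerical check of (6.13.15)-(6.13.16); an illustrative sketch, not from
# the book.  Sample data: q(t) = e^{-t}, so the L^1 norm of q equals 1.
lam, c1, c2 = 2j, 1.0, 0.5
s = cmath.sqrt(lam)
if s.imag < 0:                 # branch with Im sqrt(lambda) > 0
    s = -s

def q(t):
    return math.exp(-t)

def deriv(t, u, v):            # first-order system for -u'' + q u = lam u
    return v, (q(t) - lam) * u

h, N = 0.005, 1000             # classical RK4 on [0, 5]
xs, us = [0.0], [complex(c1)]
u, v = complex(c1), complex(c2)
for n in range(N):
    t0 = n * h
    k1u, k1v = deriv(t0, u, v)
    k2u, k2v = deriv(t0 + h/2, u + h/2*k1u, v + h/2*k1v)
    k3u, k3v = deriv(t0 + h/2, u + h/2*k2u, v + h/2*k2v)
    k4u, k4v = deriv(t0 + h, u + h*k3u, v + h*k3v)
    u += h/6*(k1u + 2*k2u + 2*k3u + k4u)
    v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
    xs.append((n + 1) * h)
    us.append(u)

def rhs(i):                    # right-hand side of (6.13.16) at x = xs[i]
    xi, acc = xs[i], 0j
    for j in range(i):         # trapezoid rule for the integral term
        f0 = cmath.sin(s * (xi - xs[j])) / s * q(xs[j]) * us[j]
        f1 = cmath.sin(s * (xi - xs[j+1])) / s * q(xs[j+1]) * us[j+1]
        acc += 0.5 * h * (f0 + f1)
    return c1 * cmath.cos(s * xi) + c2 * cmath.sin(s * xi) / s + acc

scale = max(abs(w) for w in us)
err = max(abs(us[i] - rhs(i)) for i in range(0, N + 1, 100)) / scale
# a priori bound (6.13.15), using int_0^x |q| <= 1 for this q
bound = (abs(c1) + abs(c2) / abs(s)) * math.exp(1.0 / abs(s))
ok = all(abs(cmath.exp(1j * s * xs[i]) * us[i]) <= bound + 1e-9
         for i in range(N + 1))
```

The relative discrepancy between the integrated solution and (6.13.16) reflects only the discretization, and $|e^{i\sqrt{\lambda}x}u(x,\lambda)|$ stays below the Gronwall-type bound of (6.13.15), as the proof predicts.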

The next proposition is a consequence of Lemma 6.13.1 and Lemma 6.13.3. Here the Weyl function M in (6.13.9) is specified.

**Proposition 6.13.4.** Let $M$ be the Weyl function corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ in (6.13.7). Then there exists a countable (possibly empty) set $\mathcal{D} \subset (-\infty, 0)$, which is bounded from below and may only accumulate at $0$, such that $M$ is holomorphic on $\mathbb{C} \setminus ([0, \infty) \cup \mathcal{D})$ and

$$M(\lambda) = \frac{i\sqrt{\lambda} - \int\_0^\infty e^{i\sqrt{\lambda}t} q(t) u\_1(t, \lambda) \, dt}{1 + \int\_0^\infty e^{i\sqrt{\lambda}t} q(t) u\_2(t, \lambda) \, dt}, \qquad \lambda \in \mathbb{C} \setminus ([0, \infty) \cup \mathcal{D}). \tag{6.13.18}$$

Proof. In order to determine the Weyl function $M$ corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$, observe first that for $\lambda \in \mathbb{C} \setminus [0, \infty)$ the fundamental systems $(u\_1(\cdot, \lambda); u\_2(\cdot, \lambda))$ and $(e\_1(\cdot, \lambda); e\_2(\cdot, \lambda))$ in Lemma 6.13.1 are connected by

$$\begin{aligned} u\_1(\cdot,\lambda) &= A\_{11}(\lambda)e\_1(\cdot,\lambda) + A\_{12}(\lambda)e\_2(\cdot,\lambda), \\ u\_2(\cdot,\lambda) &= A\_{21}(\lambda)e\_1(\cdot,\lambda) + A\_{22}(\lambda)e\_2(\cdot,\lambda), \end{aligned} \tag{6.13.19}$$

where $A\_{ij}(\lambda)$, $i, j = 1, 2$, are connection coefficients. Since

$$\begin{aligned} u\_1(\cdot,\lambda) + M(\lambda)u\_2(\cdot,\lambda) &= e\_1(\cdot,\lambda) \left( A\_{11}(\lambda) + M(\lambda)A\_{21}(\lambda) \right) \\ &+ e\_2(\cdot,\lambda) \left( A\_{12}(\lambda) + M(\lambda)A\_{22}(\lambda) \right) \end{aligned}$$

and $e\_1(\cdot, \lambda) \in L^2(0, \infty)$, $e\_2(\cdot, \lambda) \notin L^2(0, \infty)$ for $\lambda \in \mathbb{C} \setminus [0, \infty)$ by Lemma 6.13.1, it follows that for $\lambda \in \mathbb{C} \setminus [0, \infty)$ the function $u\_1(\cdot, \lambda) + M(\lambda)u\_2(\cdot, \lambda)$ belongs to $L^2(0, \infty)$ if and only if $M(\lambda)$ satisfies the equation

$$A\_{12}(\lambda) + M(\lambda)A\_{22}(\lambda) = 0.\tag{6.13.20}$$

Hence, it remains to compute the connection coefficients $A\_{12}(\lambda)$ and $A\_{22}(\lambda)$. It follows from Lemma 6.13.3 with $c\_1 = 1$ and $c\_2 = 0$ that

$$u\_1(x,\lambda) = e^{-i\sqrt{\lambda}x} \left(\frac{1}{2} - \frac{1}{2i\sqrt{\lambda}} \int\_0^\infty e^{i\sqrt{\lambda}t} q(t) u\_1(t,\lambda) \,dt + o(1)\right),$$

and with $c\_1 = 0$ and $c\_2 = 1$ that

$$u\_2(x,\lambda) = e^{-i\sqrt{\lambda}x} \left( -\frac{1}{2i\sqrt{\lambda}} - \frac{1}{2i\sqrt{\lambda}} \int\_0^\infty e^{i\sqrt{\lambda}t} \, q(t) u\_2(t,\lambda) \, dt + o(1) \right).$$

Comparing with (6.13.19) and Lemma 6.13.1, and taking care of the terms involving o(1), this gives

$$A\_{12}(\lambda) = \frac{1}{2} - \frac{1}{2i\sqrt{\lambda}} \int\_0^\infty e^{i\sqrt{\lambda}t} \, q(t)u\_1(t,\lambda) \, dt$$

and, likewise,

$$A\_{22}(\lambda) = -\frac{1}{2i\sqrt{\lambda}} - \frac{1}{2i\sqrt{\lambda}} \int\_0^\infty e^{i\sqrt{\lambda}t} \, q(t)u\_2(t,\lambda) \, dt.$$

Hence, for $\lambda \in \mathbb{C} \setminus [0, \infty)$ such that $A\_{22}(\lambda) \neq 0$ it follows from (6.13.20) that

$$M(\lambda) = -\frac{A\_{12}(\lambda)}{A\_{22}(\lambda)} = \frac{\frac{1}{2} - \frac{1}{2i\sqrt{\lambda}} \int\_0^\infty e^{i\sqrt{\lambda}t} \, q(t)u\_1(t,\lambda) \, dt}{\frac{1}{2i\sqrt{\lambda}} + \frac{1}{2i\sqrt{\lambda}} \int\_0^\infty e^{i\sqrt{\lambda}t} \, q(t)u\_2(t,\lambda) \, dt},$$

which leads to the expression for $M$ in (6.13.18). Since the Weyl function is holomorphic on $\rho(A\_0)$, it is clear that $A\_{22}(\lambda) \neq 0$ for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Note also that the functions $A\_{12}$ and $A\_{22}$ are both holomorphic on $\mathbb{C} \setminus [0, \infty)$, and that the zeros of $A\_{22}$ in $(-\infty, 0)$ may only accumulate at $0$ and $-\infty$. However, (6.13.15) shows that

$$|e^{i\sqrt{\lambda}x}u\_2(x,\lambda)| \le \frac{1}{|\sqrt{\lambda}|} \exp\left(\frac{1}{|\sqrt{\lambda}|} \int\_0^x |q(t)| \, dt\right),$$

and hence there exists $C\_- \in (-\infty, 0)$ such that for all $\lambda \in (-\infty, C\_-)$

$$\left| \int\_0^\infty e^{i\sqrt{\lambda}t} q(t) u\_2(t, \lambda) \, dt \right| < 1.$$

Therefore, $A\_{22}(\lambda) \neq 0$ for all $\lambda \in (-\infty, C\_-)$ and $0$ is the only possible accumulation point of the zeros of $A\_{22}$ in the interval $(-\infty, 0)$. This completes the proof of the proposition. □

It will turn out in Corollary 6.13.6 that the set $\mathbb{C} \setminus ([0, \infty) \cup D)$ coincides with the resolvent set of the self-adjoint operator $A\_0$, so that the form of the Weyl function $M$ in Proposition 6.13.4 is valid for all $\lambda \in \rho(A\_0)$.

In the following lemma, which complements Proposition 6.13.4, it will be shown that the Weyl function admits a continuation onto $(0, \infty)$ from $\mathbb{C}\_+$ with positive imaginary part.

**Lemma 6.13.5.** Let $M$ be the Weyl function corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ in (6.13.7). Then the limits $\lim\_{\varepsilon \downarrow 0} M(\lambda + i\varepsilon)$ and $\lim\_{\varepsilon \downarrow 0} \operatorname{Im} M(\lambda + i\varepsilon)$ exist for all $\lambda > 0$ and are given by

$$M(\lambda + i0) = -\frac{a\_{11}(\lambda) + ia\_{12}(\lambda)}{a\_{21}(\lambda) + ia\_{22}(\lambda)}, \qquad \lambda > 0,\tag{6.13.21}$$

and

$$\operatorname{Im} M(\lambda + i0) = \frac{1}{\sqrt{\lambda}} \frac{1}{a\_{21}(\lambda)^2 + a\_{22}(\lambda)^2} > 0, \qquad \lambda > 0,\tag{6.13.22}$$

respectively, where $a\_{ij} : (0, \infty) \to \mathbb{R}$, $i, j = 1, 2$, are the coefficients in the asymptotic formulas

$$\begin{aligned} u\_1(x,\lambda) &= a\_{11}(\lambda)\cos\sqrt{\lambda}x + a\_{12}(\lambda)\sin\sqrt{\lambda}x + o(1), \quad x \to \infty, \\ u\_2(x,\lambda) &= a\_{21}(\lambda)\cos\sqrt{\lambda}x + a\_{22}(\lambda)\sin\sqrt{\lambda}x + o(1), \quad x \to \infty, \end{aligned} \tag{6.13.23}$$

for $\lambda > 0$. The functions $a\_{ij}$ have the form

$$\begin{split} a\_{11}(\lambda) &= 1 - \frac{1}{\sqrt{\lambda}} \int\_0^\infty \sin\sqrt{\lambda}t \, q(t) u\_1(t, \lambda) \, dt, \\ a\_{12}(\lambda) &= \frac{1}{\sqrt{\lambda}} \int\_0^\infty \cos\sqrt{\lambda}t \, q(t) u\_1(t, \lambda) \, dt, \\ a\_{21}(\lambda) &= -\frac{1}{\sqrt{\lambda}} \int\_0^\infty \sin\sqrt{\lambda}t \, q(t) u\_2(t, \lambda) \, dt, \\ a\_{22}(\lambda) &= \frac{1}{\sqrt{\lambda}} \left( 1 + \int\_0^\infty \cos\sqrt{\lambda}t \, q(t) u\_2(t, \lambda) \, dt \right), \end{split} \tag{6.13.24}$$

and satisfy

$$a\_{11}(\lambda)a\_{22}(\lambda) - a\_{12}(\lambda)a\_{21}(\lambda) = \frac{1}{\sqrt{\lambda}}, \quad \lambda > 0. \tag{6.13.25}$$

Proof. It follows from Lemma 6.13.2 (i) that the solution

$$u(x,\lambda) = c\_1 \cos\sqrt{\lambda}x + c\_2 \frac{\sin\sqrt{\lambda}x}{\sqrt{\lambda}} + \int\_0^x \frac{\sin\sqrt{\lambda}(x-t)}{\sqrt{\lambda}} q(t)u(t,\lambda) \,dt$$

of (6.13.14) is bounded for all λ > 0; cf. the proof of Lemma 6.13.3. Therefore,

$$\begin{split} u(x,\lambda) &= c\_1 \cos\sqrt{\lambda}x + c\_2 \frac{\sin\sqrt{\lambda}x}{\sqrt{\lambda}} \\ &\quad + \int\_0^\infty \frac{\sin\sqrt{\lambda}(x-t)}{\sqrt{\lambda}} q(t)u(t,\lambda) \,dt + o(1) \end{split} \tag{6.13.26}$$

and in the same way one obtains

$$\begin{split} u'(x,\lambda) &= -c\_1\sqrt{\lambda}\sin\sqrt{\lambda}x + c\_2\cos\sqrt{\lambda}x \\ &\quad + \int\_0^\infty \cos\sqrt{\lambda}(x-t)\,q(t)u(t,\lambda)\,dt + o(1). \end{split} \tag{6.13.27}$$

From (6.13.26) and

$$\begin{aligned} \int\_0^\infty \frac{\sin\sqrt{\lambda}(x-t)}{\sqrt{\lambda}} q(t) u(t,\lambda) \, dt &= \frac{\sin\sqrt{\lambda}x}{\sqrt{\lambda}} \int\_0^\infty \cos\sqrt{\lambda}t \, q(t) u(t,\lambda) \, dt \\ &- \frac{\cos\sqrt{\lambda}x}{\sqrt{\lambda}} \int\_0^\infty \sin\sqrt{\lambda}t \, q(t) u(t,\lambda) \, dt \end{aligned}$$

one then derives for $\lambda > 0$ the asymptotic formulas (6.13.23), where the coefficient functions $a\_{ij}$ are as in (6.13.24). Similarly, from (6.13.27) and

$$\begin{aligned} \int\_0^\infty \cos\sqrt{\lambda}(x-t) \, q(t) u(t,\lambda) \, dt &= \sin\sqrt{\lambda}x \int\_0^\infty \sin\sqrt{\lambda}t \, q(t) u(t,\lambda) \, dt \\ &+ \cos\sqrt{\lambda}x \int\_0^\infty \cos\sqrt{\lambda}t \, q(t) u(t,\lambda) \, dt \end{aligned}$$

one obtains for the derivatives

$$\begin{aligned} u\_1'(x,\lambda) &= -a\_{11}(\lambda)\sqrt{\lambda}\sin\sqrt{\lambda}x + a\_{12}(\lambda)\sqrt{\lambda}\cos\sqrt{\lambda}x + o(1), \quad x \to \infty, \\ u\_2'(x,\lambda) &= -a\_{21}(\lambda)\sqrt{\lambda}\sin\sqrt{\lambda}x + a\_{22}(\lambda)\sqrt{\lambda}\cos\sqrt{\lambda}x + o(1), \quad x \to \infty. \end{aligned}$$

In view of the initial values of $u\_1(\cdot, \lambda)$ and $u\_2(\cdot, \lambda)$, their Wronskian satisfies

$$1 = W(u\_1(\cdot,\lambda), u\_2(\cdot,\lambda)) = \sqrt{\lambda} \left( a\_{11}(\lambda)a\_{22}(\lambda) - a\_{12}(\lambda)a\_{21}(\lambda) \right) + o(1)$$

as x → ∞, and hence (6.13.25) follows.

To complete the proof of (6.13.21) and (6.13.22), it remains to note that for λ > 0 the limits

$$\lim\_{\varepsilon \downarrow 0} \left( i\sqrt{\lambda + i\varepsilon} - \int\_0^\infty e^{i\sqrt{\lambda + i\varepsilon}t} q(t) u\_1(t, \lambda + i\varepsilon) \, dt \right)$$

and

$$\lim\_{\varepsilon \downarrow 0} \left( 1 + \int\_0^\infty e^{i\sqrt{\lambda + i\varepsilon}t} q(t) u\_2(t, \lambda + i\varepsilon) \, dt \right)$$

exist and are given by

$$i\sqrt{\lambda} - \int\_0^\infty e^{i\sqrt{\lambda}t} q(t)u\_1(t,\lambda) \,dt \quad \text{and} \quad 1 + \int\_0^\infty e^{i\sqrt{\lambda}t} q(t)u\_2(t,\lambda) \,dt,$$

respectively, so that the statements follow from the representation of $M$ in Proposition 6.13.4 and the form of the coefficient functions $a\_{ij}$. Note that $a\_{21}(\lambda)$ and $a\_{22}(\lambda)$ do not vanish simultaneously for any $\lambda > 0$ by (6.13.25). □

In the next corollary the spectral properties of the self-adjoint operator $A\_0$ with Dirichlet boundary condition at $0$ are discussed.

**Corollary 6.13.6.** Let $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ be the boundary triplet in (6.13.7) and consider the self-adjoint operator

$$A\_0 = -D^2 + q, \quad \text{dom}\, A\_0 = \left\{ f \in \text{dom}\, T\_{\text{max}} \, : \, f(0) = 0 \right\}.$$

Then the following holds for the spectrum of $A\_0$:

(i) $\sigma\_{\rm ac}(A\_0) = [0, \infty)$;

(ii) $\sigma\_{\rm p}(A\_0) \cap (0, \infty) = \emptyset$ and $\sigma\_{\rm sc}(A\_0) \cap (0, \infty) = \emptyset$;

(iii) $\sigma(A\_0) \cap (-\infty, 0)$ consists of at most countably many eigenvalues of multiplicity one which are bounded from below and may accumulate only at $0$.

Proof. In order to apply the results from Chapter 3, recall first that, by Proposition 6.4.4, the minimal operator $T\_{\rm min}$ is simple. It follows from Proposition 6.13.4 that the spectrum of $A\_0$ in $(-\infty, 0)$ consists of at most countably many eigenvalues which are bounded from below and may accumulate only at $0$. Since the singular endpoint $\infty$ is in the limit-point case, each eigenvalue has multiplicity one. This shows (iii). From Theorem 3.6.5 and Lemma 6.13.5 one then concludes that

$$
\sigma\_{\rm ac}(A\_0) = \text{clos}\_{\rm ac} \left( \left\{ \lambda \in \mathbb{R} : 0 < \text{Im} \, M(\lambda + i0) < +\infty \right\} \right) = [0, \infty),
$$

i.e., (i) holds. According to Lemma 6.13.5, $M(\lambda + i0)$ exists for all $\lambda > 0$ and hence $\lim\_{\varepsilon \downarrow 0} i\varepsilon M(\lambda + i\varepsilon) = 0$ for all $\lambda > 0$. That $\sigma\_{\rm p}(A\_0) \cap (0, \infty) = \emptyset$ follows from Theorem 3.5.5 and Corollary 3.5.6 (see also Theorem 3.6.1). Finally, that $\sigma\_{\rm sc}(A\_0) \cap (0, \infty) = \emptyset$ follows from Theorem 3.6.8 (see also Corollary 3.6.9) and $0 < \operatorname{Im} M(\lambda + i0) < +\infty$ in Lemma 6.13.5. □

Recall from (6.4.8) that for $\tau \in \mathbb{R}$ the Weyl function corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$ in (6.4.7) is given by

$$M\_{\tau}(\lambda) = \frac{1 + \tau M(\lambda)}{\tau - M(\lambda)} \tag{6.13.28}$$

and that the self-adjoint restriction of $T\_{\rm max}$ corresponding to $\ker \Gamma\_0^\tau$ is

$$A\_\tau = -D^2 + q, \qquad \text{dom}\, A\_\tau = \left\{ f \in \text{dom}\, T\_{\text{max}} : f'(0) = \tau f(0) \right\}.$$

For $A\_\tau$ one obtains a statement similar to Corollary 6.13.6.

**Proposition 6.13.7.** Let $\tau \in \mathbb{R}$ and let $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$ be the boundary triplet in (6.4.7) with $A\_\tau$ as above. Then the following holds for the spectrum of $A\_\tau$:

(i) $\sigma\_{\rm ac}(A\_\tau) = [0, \infty)$;

(ii) $\sigma\_{\rm p}(A\_\tau) \cap (0, \infty) = \emptyset$ and $\sigma\_{\rm sc}(A\_\tau) \cap (0, \infty) = \emptyset$;

(iii) $\sigma(A\_\tau) \cap (-\infty, 0)$ consists of at most countably many eigenvalues of multiplicity one which may accumulate only at $0$.

Proof. It follows from Proposition 6.13.4 that $1 + \tau M$ and $\tau - M$ are holomorphic on $\mathbb{C} \setminus [0, \infty)$, with the possible exception of a countable set of poles in $(-\infty, 0)$ which may accumulate only at $0$. Furthermore, since $M$ is a Nevanlinna function, it is nondecreasing between two consecutive poles in $(-\infty, 0)$, and hence $\tau - M$ has at most countably many zeros in $(-\infty, 0)$ which may accumulate only at $0$. Therefore, the function $M\_\tau$ in (6.13.28) has at most countably many poles in $(-\infty, 0)$ which may accumulate only at $0$.

By Lemma 6.13.5, the limit $M\_\tau(\lambda + i0)$ exists for all $\lambda > 0$. In fact, it is clear that the limits $1 + \tau M(\lambda + i0)$ and $\tau - M(\lambda + i0)$ exist for all $\lambda > 0$. Now assume that $\tau = M(\lambda + i0)$ for some $\lambda > 0$. Then it follows with the functions $a\_{ij}$ in Lemma 6.13.5 that

$$\tau(a\_{21}(\lambda) + ia\_{22}(\lambda)) = -a\_{11}(\lambda) - ia\_{12}(\lambda)$$

or, equivalently,

$$a\_{11}(\lambda) + \tau a\_{21}(\lambda) + i \left(a\_{12}(\lambda) + \tau a\_{22}(\lambda)\right) = 0.$$

Hence, $a\_{11}(\lambda) = -\tau a\_{21}(\lambda)$ and $a\_{12}(\lambda) = -\tau a\_{22}(\lambda)$, and (6.13.25) yields

$$\frac{1}{\sqrt{\lambda}} = -\tau a\_{21}(\lambda)a\_{22}(\lambda) + \tau a\_{22}(\lambda)a\_{21}(\lambda) = 0;$$

a contradiction. It follows that the limit $M\_\tau(\lambda + i0)$ exists for all $\lambda > 0$. A simple computation using $\operatorname{Im} M(\lambda + i0) > 0$ for $\lambda > 0$ shows that

$$\operatorname{Im} M\_{\tau}(\lambda + i0) = \frac{(1 + \tau^2) \operatorname{Im} M(\lambda + i0)}{|\tau - M(\lambda + i0)|^2} > 0, \qquad \lambda > 0.$$

From these properties of $M\_\tau$ the assertions (i)–(iii) follow in the same way as in the proof of Corollary 6.13.6. □
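The fractional-linear identity for $\operatorname{Im} M\_\tau$ used above can be verified symbolically. The following sketch (Python with sympy; the variable names $u, v$ are ours, not the book's) writes $M(\lambda + i0) = u + iv$ with $u, v$ real:

```python
import sympy as sp

# Write M(lambda + i0) = u + i*v with u, v real and tau real, as in the proof.
tau, u, v = sp.symbols('tau u v', real=True)
M = u + sp.I * v

# The transformed Weyl function (6.13.28).
M_tau = (1 + tau * M) / (tau - M)

# Claimed identity: Im M_tau = (1 + tau^2) * Im M / |tau - M|^2.
im_M_tau = sp.simplify(sp.im(M_tau))
target = (1 + tau**2) * v / ((tau - u)**2 + v**2)
```

In particular, `im_M_tau` and `target` agree as rational functions, so $\operatorname{Im} M(\lambda+i0) > 0$ indeed forces $\operatorname{Im} M\_\tau(\lambda+i0) > 0$.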

**Example 6.13.8.** Consider the integrable potential

$$q(x) = -\frac{2A^2}{\cosh^2(Ax+B)},$$

where $A \geq 0$ and $B \in \mathbb{R}$ are arbitrary. Define the smooth bounded function

$$T(x) = -A \tanh(Ax + B),$$

so that $q(x) = 2T'(x)$ and

$$T(x)^2 - T'(x) = A^2,$$

which gives $T''(x) = q(x)T(x)$. It is therefore clear that the functions

$$\begin{aligned} \varphi(x,\lambda) &= \cos\sqrt{\lambda}x + T(x)\frac{\sin\sqrt{\lambda}x}{\sqrt{\lambda}},\\ \psi(x,\lambda) &= \sqrt{\lambda}\sin\sqrt{\lambda}x - T(x)\cos\sqrt{\lambda}x,\end{aligned}$$

satisfy the equation $-y'' + qy = \lambda y$ with the initial values

$$\begin{aligned} \varphi(0,\lambda) &= 1, & \varphi'(0,\lambda) &= T(0),\\ \psi(0,\lambda) &= -T(0), & \psi'(0,\lambda) &= \lambda - T'(0). \end{aligned}$$

Hence, the following combinations of ϕ(·, λ) and ψ(·, λ), given by

$$u\_1(x,\lambda) = \frac{1}{\lambda + A^2} \left( (\lambda - T'(0))\varphi(x,\lambda) - T(0)\psi(x,\lambda) \right),$$

and

$$u\_2(x,\lambda) = \frac{1}{\lambda + A^2} \left( T(0)\varphi(x,\lambda) + \psi(x,\lambda) \right),$$

form a fundamental system with the initial conditions in (6.13.8). It is clear that $\lim\_{x \to \infty} T(x) = -A$ and hence the coefficients in Lemma 6.13.5 are given by the functions

$$
\begin{pmatrix} a\_{11}(\lambda) & a\_{12}(\lambda) \\ a\_{21}(\lambda) & a\_{22}(\lambda) \end{pmatrix} = \frac{1}{\lambda + A^2} \begin{pmatrix} \lambda - T'(0) - AT(0) & \frac{-\lambda(A + T(0)) + AT'(0)}{\sqrt{\lambda}} \\ T(0) + A & \frac{\lambda - AT(0)}{\sqrt{\lambda}} \end{pmatrix}.
$$

Note that $a\_{11}(\lambda)a\_{22}(\lambda) - a\_{12}(\lambda)a\_{21}(\lambda) = 1/\sqrt{\lambda}$ by Lemma 6.13.5 and this also leads to

$$M(\lambda) = -\frac{\lambda^{3/2} - \sqrt{\lambda}(T'(0) + AT(0)) + i\left(AT'(0) - \lambda(A + T(0))\right)}{\sqrt{\lambda}(T(0) + A) + i(\lambda - AT(0))}, \quad \lambda > 0,\tag{6.13.29}$$

and

$$\operatorname{Im} M(\lambda) = \frac{\sqrt{\lambda} \left(\lambda + A^2\right)}{\lambda + T(0)^2}, \qquad \lambda > 0.$$

Upon writing the solution $x \mapsto u\_1(x, \lambda) + M(\lambda)u\_2(x, \lambda)$ for $\lambda \in \mathbb{C} \setminus [0, \infty)$ in the form

$$d\_1(x,\lambda)e^{i\sqrt{\lambda}x} + d\_2(x,\lambda)e^{-i\sqrt{\lambda}x} \tag{6.13.30}$$

one observes that $x \mapsto d\_1(x, \lambda)$ and $x \mapsto d\_2(x, \lambda)$ are bounded with limits as $x \to \infty$. One concludes that $\lim\_{x \to \infty} d\_2(x, \lambda) = 0$, since the solution (6.13.30) belongs to $L^2(0, \infty)$. A computation of the coefficient $d\_2(x, \lambda)$ shows that the expression for $M$ in (6.13.29) remains valid also for $\lambda \in \mathbb{C} \setminus [0, \infty)$. Now one verifies that $M$ has a pole at a point $\lambda < 0$ if and only if $B < 0$ and $\lambda = -T(0)^2$. Hence, the operator

$$A\_0 = -D^2 + q, \quad \text{dom}\, A\_0 = \left\{ f \in \text{dom}\, T\_{\text{max}} : f(0) = 0 \right\},$$

has one negative eigenvalue $-T(0)^2$ if $B < 0$ and

$$f(x) = e^{-T(0)x} \left( T(x) - T(0) \right)$$

is a corresponding eigenfunction; in the case B ≥ 0 the operator A<sup>0</sup> has no negative eigenvalues.
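Several of the closed-form claims in Example 6.13.8 can be checked by machine. The sketch below (Python with sympy, assuming the notation of the example; the concrete values $A = 1.3$, $B = -0.4$ in the numerical part are our choice) verifies $q = 2T'$, $T^2 - T' = A^2$, the eigenfunction equation at $\lambda = -T(0)^2$, and the formula for $\operatorname{Im} M(\lambda)$ derived from (6.13.29):

```python
import sympy as sp

x = sp.symbols('x', real=True)
A, B = sp.symbols('A B', real=True)

T = -A * sp.tanh(A * x + B)
q = -2 * A**2 / sp.cosh(A * x + B)**2

# q = 2 T' and T^2 - T' = A^2 (rewrite hyperbolics as exponentials first).
assert sp.simplify((q - 2 * sp.diff(T, x)).rewrite(sp.exp)) == 0
assert sp.simplify((T**2 - sp.diff(T, x) - A**2).rewrite(sp.exp)) == 0

# Candidate eigenfunction f(x) = e^{-T(0)x}(T(x) - T(0)) at lambda = -T(0)^2:
# the residual -f'' + q f - lambda f should vanish; check at sample points.
T0 = T.subs(x, 0)
f = sp.exp(-T0 * x) * (T - T0)
residual = -sp.diff(f, x, 2) + q * f + T0**2 * f
vals = [abs(complex(residual.subs({A: 1.3, B: -0.4, x: xv}).evalf()))
        for xv in (0.0, 0.7, 2.5)]

# Im M(lambda) = sqrt(lambda)(lambda + A^2)/(lambda + T(0)^2) for lambda > 0,
# from (6.13.29) with T'(0) = T(0)^2 - A^2; substitute mu = sqrt(lambda) > 0.
mu, a, t0 = sp.symbols('mu a t0', positive=True)
tp0 = t0**2 - a**2
num = mu**3 - mu * (tp0 + a * t0) + sp.I * (a * tp0 - mu**2 * (a + t0))
den = mu * (t0 + a) + sp.I * (mu**2 - a * t0)
imM = sp.simplify(sp.im(-num / den))
assert sp.simplify(imM - mu * (mu**2 + a**2) / (mu**2 + t0**2)) == 0
```

The last assertion reproduces the displayed formula for $\operatorname{Im} M(\lambda)$, $\lambda > 0$, independently of the sign of $B$.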

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 7**

## **Canonical Systems of Differential Equations**

Boundary value problems for regular and singular canonical systems of differential equations are investigated. After a brief introduction to Hilbert spaces of $\mathbb{C}^2$-valued vector functions which are square-integrable with respect to some $2 \times 2$ matrix-valued function in Section 7.1, the class of canonical systems to be studied here is introduced in Section 7.2. In Section 7.3 the notions of regular, quasiregular, and singular endpoints for canonical systems are explained. The number of square-integrable solutions at an endpoint of the interval is studied in Section 7.4. Together with a monotonicity principle from Chapter 5, this leads to a limit-circle/limit-point classification of singular endpoints in the same way as in Weyl's alternative in Chapter 6. The important concept of definiteness of canonical systems is defined and studied in Section 7.5, and a cut-off technique for solutions is provided. Afterwards, in Section 7.6, a symmetric minimal relation in the appropriate $L^2$-Hilbert space and its adjoint, the maximal relation, are associated with real definite canonical systems. The defect numbers of the minimal relation are specified for regular endpoints and for endpoints in the limit-circle or limit-point case. Boundary triplets and Weyl functions for canonical systems in the limit-circle case are constructed in Section 7.7, while the limit-point case is treated in Section 7.8. The connection between subordinate solutions and properties of the Weyl function, as well as the description of the absolutely continuous and singular spectrum, are studied in Section 7.9. Finally, in Section 7.10 some special classes of canonical systems of differential equations are discussed, among them weighted Sturm–Liouville equations.

## **7.1 Classes of integrable functions**

The purpose of this section is to introduce classes of vector functions which are locally square-integrable with respect to a measurable nonnegative matrix function and to collect some useful properties of such functions.

Let $\imath \subset \mathbb{R}$ be an interval, not necessarily bounded, with endpoints $a < b$, not necessarily belonging to $\imath$. In the following an integral of a vector function or a matrix function over $\imath$ or over a subinterval is always understood in the componentwise sense. The linear space $\mathcal{L}^1\_{\text{loc}}(\imath)$ of locally integrable $\mathbb{C}^2$-valued vector functions consists of all measurable $\mathbb{C}^2$-valued vector functions $f$ defined almost everywhere on $\imath$ such that for each compact subinterval $K \subset \imath$

$$\int\_{K} |f(s)| \, ds < \infty.$$

Here $|x|$ denotes the Euclidean norm of $x$ in $\mathbb{C}^2$. Note that for $f \in \mathcal{L}^1\_{\text{loc}}(\imath)$ and each compact subinterval $K \subset \imath$ the norm inequality

$$\left| \int\_{K} f(s) \, ds \right| \le \int\_{K} |f(s)| \, ds \tag{7.1.1}$$

holds. A $\mathbb{C}^2$-valued vector function $f \in \mathcal{L}^1\_{\text{loc}}(\imath)$ is said to be integrable at the left endpoint $a$ of the interval $\imath$ or integrable at the right endpoint $b$ of the interval $\imath$ if for some, and hence for all, $c \in \mathbb{R}$ with $a < c < b$

$$\int\_{a}^{c} |f(s)| \, ds < \infty \quad \text{or} \quad \int\_{c}^{b} |f(s)| \, ds < \infty,\tag{7.1.2}$$

respectively. Similarly, a measurable 2 × 2 matrix function Φ is locally integrable on ı if for each compact subinterval K ⊂ ı

$$\int\_{K} |\Phi(s)| \, ds < \infty;$$

here and in the following |A| stands for the operator norm of a 2 × 2 matrix A. In particular,

$$\left| \int\_{K} \Phi(s) \, ds \right| \le \int\_{K} |\Phi(s)| \, ds. \tag{7.1.3}$$

The linear space consisting of all locally integrable $2 \times 2$ matrix functions on $\imath$ will also be denoted by $\mathcal{L}^1\_{\text{loc}}(\imath)$; it will be clear from the context whether the values of the functions in $\mathcal{L}^1\_{\text{loc}}(\imath)$ are vectors in $\mathbb{C}^2$ or $2 \times 2$ matrices. A $2 \times 2$ matrix function $\Phi \in \mathcal{L}^1\_{\text{loc}}(\imath)$ is said to be integrable at the left endpoint $a$ of the interval $\imath$ or integrable at the right endpoint $b$ of the interval $\imath$ if (7.1.2) holds for some, and hence for all, $c \in \mathbb{R}$ with $a < c < b$ and $f$ replaced by $\Phi$.

Note that all norms on the linear space of 2 × 2 matrices are equivalent and that the operator norm |A| of a 2 × 2 matrix A can be estimated by the Hilbert–Schmidt matrix norm as follows:

$$|A| \le \|A\|\_2 \le \sqrt{2}|A|, \quad \text{where } \|A\|\_2 := \sqrt{\sum\_{i,j=1}^2 |a\_{ij}|^2}. \tag{7.1.4}$$

For the product of 2 × 2 matrices A and B one has

$$|AB| \le |A||B| \quad \text{and} \quad ||AB||\_2 \le ||A||\_2 \, ||B||\_2. \tag{7.1.5}$$

In the case where the 2 × 2 matrix A is nonnegative the trace norm of A can be estimated by the Hilbert–Schmidt matrix norm:

$$\|A\|\_2 \le \text{tr}\,A \le \sqrt{2} \|A\|\_2, \quad \text{where } \text{tr}\,A = a\_{11} + a\_{22},$$

which also gives

$$|A| \le \text{tr}\, A \le 2|A|. \tag{7.1.6}$$
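The norm comparisons (7.1.4)–(7.1.6) are easy to probe numerically; a minimal sketch (Python with numpy, not part of the text) on random complex $2 \times 2$ matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def op(X):  # operator norm |X| (largest singular value)
    return float(np.linalg.norm(X, 2))

def hs(X):  # Hilbert-Schmidt norm ||X||_2
    return float(np.linalg.norm(X, 'fro'))

for _ in range(100):
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

    # (7.1.4): |A| <= ||A||_2 <= sqrt(2) |A|
    assert op(A) <= hs(A) + 1e-12
    assert hs(A) <= np.sqrt(2) * op(A) + 1e-12

    # (7.1.5): submultiplicativity in both norms
    assert op(A @ B) <= op(A) * op(B) + 1e-12
    assert hs(A @ B) <= hs(A) * hs(B) + 1e-12

    # (7.1.6) for the nonnegative matrix P = A^* A: |P| <= tr P <= 2 |P|
    P = A.conj().T @ A
    tr = np.real(np.trace(P))
    assert op(P) <= tr + 1e-12
    assert tr <= 2 * op(P) + 1e-12
```

The small tolerances only guard against floating-point rounding; the inequalities themselves are exact.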

The following definition introduces the semi-inner product space $\mathcal{L}^2\_{\Delta}(\imath)$ of $\mathbb{C}^2$-valued functions which are square-integrable with respect to $\Delta$; it is assumed that $\Delta$ is a $2 \times 2$ matrix function on $\imath$ and that $\Delta(s) \geq 0$ for almost every $s \in \imath$. In order to express the seminorm on $\mathcal{L}^2\_{\Delta}(\imath)$ the notation

$$f(s)^{\*}\Delta(s)f(s) = \left(\Delta(s)f(s), f(s)\right)$$

will be useful. Here $(\cdot, \cdot)$ denotes the standard scalar product in $\mathbb{C}^2$ and $f$ is any $\mathbb{C}^2$-valued function defined on the interval $\imath$.

**Definition 7.1.1.** Let $\imath \subset \mathbb{R}$ be an interval and let $\Delta$ be a measurable $2 \times 2$ matrix function such that $\Delta(s) \geq 0$ for almost every $s \in \imath$. Then $\mathcal{L}^2\_{\Delta}(\imath)$ denotes the linear space of all measurable functions $f$ on $\imath$ with values in $\mathbb{C}^2$ which are square-integrable with respect to $\Delta$, that is,

$$\left\|f\right\|\_{\Delta}^{2} = \int\_{\imath} f(s)^{\*} \Delta(s) f(s) \, ds = \int\_{\imath} |\Delta(s)^{\frac{1}{2}} f(s)|^{2} \, ds < \infty. \tag{7.1.7}$$

The semidefinite inner product $(\cdot, \cdot)\_{\Delta}$ on $\mathcal{L}^2\_{\Delta}(\imath)$ corresponding to the seminorm $\|\cdot\|\_{\Delta}$ in (7.1.7) is given by

$$(f,g)\_{\Delta} = \int\_{\imath} g(s)^\* \Delta(s) f(s) \, ds, \qquad f, g \in \mathcal{L}^2\_{\Delta}(\imath).\tag{7.1.8}$$

**Theorem 7.1.2.** Let $\imath \subset \mathbb{R}$ be an interval and let $\Delta$ be a measurable $2 \times 2$ matrix function such that $\Delta(s) \geq 0$ for almost every $s \in \imath$. Then the linear space $\mathcal{L}^2\_{\Delta}(\imath)$ equipped with the seminorm (7.1.7) is complete.

Proof. Since the $2 \times 2$ matrix function $\Delta$ is measurable and nonnegative almost everywhere, there are measurable nonnegative functions $e\_1$ and $e\_2$, and a measurable $2 \times 2$ matrix function $U$ with unitary values, such that

$$
\Delta(s) = U(s)^\* \Xi(s) U(s), \quad \text{where} \quad \Xi(s) = \begin{pmatrix} e\_1(s) & 0 \\ 0 & e\_2(s) \end{pmatrix},
$$

for almost all $s \in \imath$. Hence, one has for all measurable functions $f$ with values in $\mathbb{C}^2$ on $\imath$ that

$$\int\_{\imath} f(s)^\* \Delta(s) f(s) \, ds = \int\_{\imath} (Uf)(s)^\* \Xi(s) (Uf)(s) \, ds.$$

Written out in components this gives

$$\begin{split} \int\_{\imath} f(s)^{\*} \Delta(s) f(s) \, ds &= \int\_{\imath} |(Uf)\_{1}(s)|^{2} e\_{1}(s) \, ds + \int\_{\imath} |(Uf)\_{2}(s)|^{2} e\_{2}(s) \, ds \\ &= \int\_{\imath} |(Uf)\_{1}(s)|^{2} \, d\mu\_{1}(s) + \int\_{\imath} |(Uf)\_{2}(s)|^{2} \, d\mu\_{2}(s), \end{split} \tag{7.1.9}$$

where the measures $\mu\_1$ and $\mu\_2$ are absolutely continuous with respect to the Lebesgue measure $m$ and their Radon–Nikodým derivatives are given by $e\_1$ and $e\_2$, respectively. Therefore, it is now clear that

$$f \in \mathcal{L}\_{\Delta}^2(\imath) \quad \Leftrightarrow \quad (Uf)\_1 \in \mathcal{L}\_{d\mu\_1}^2(\imath) \quad \text{and} \quad (Uf)\_2 \in \mathcal{L}\_{d\mu\_2}^2(\imath).$$

This shows that the transformation $U$ maps the space $\mathcal{L}^2\_{\Delta}(\imath)$ bijectively onto $\mathcal{L}^2\_{d\mu\_1}(\imath) \times \mathcal{L}^2\_{d\mu\_2}(\imath)$ and from (7.1.9) one sees that the seminorms in $\mathcal{L}^2\_{\Delta}(\imath)$ and $\mathcal{L}^2\_{d\mu\_1}(\imath) \times \mathcal{L}^2\_{d\mu\_2}(\imath)$ are preserved. Therefore, the completeness of $\mathcal{L}^2\_{\Delta}(\imath)$ is a consequence of the completeness of $\mathcal{L}^2\_{d\mu\_1}(\imath)$ and $\mathcal{L}^2\_{d\mu\_2}(\imath)$. □
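The pointwise diagonalization $\Delta = U^* \Xi U$ underlying the proof can be illustrated numerically; a sketch (Python with numpy, our illustration), fixing one point $s$ and one vector $f(s)$ and using the eigendecomposition of a randomly generated nonnegative matrix:

```python
import numpy as np

rng = np.random.default_rng(7)

# A nonnegative 2x2 matrix Delta(s) at a fixed point s.
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Delta = X.conj().T @ X

# Delta = U^* Xi U with Xi = diag(e1, e2) >= 0 and U unitary.
e, V = np.linalg.eigh(Delta)       # Delta = V diag(e) V^*
U = V.conj().T                     # hence Delta = U^* diag(e) U

f = rng.normal(size=2) + 1j * rng.normal(size=2)
g = U @ f

lhs = np.real(f.conj() @ Delta @ f)               # f(s)^* Delta(s) f(s)
rhs = e[0] * abs(g[0])**2 + e[1] * abs(g[1])**2   # |(Uf)_1|^2 e_1 + |(Uf)_2|^2 e_2
```

Integrating this pointwise identity over $\imath$ is exactly (7.1.9).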

The space $\mathcal{L}^2\_{\Delta}(\imath)$ has the following approximation property.

**Lemma 7.1.3.** Each element of the seminormed space $\mathcal{L}^2\_{\Delta}(\imath)$ can be approximated by functions in $\mathcal{L}^2\_{\Delta}(\imath)$ which have compact support.

Proof. Let $(K\_n)\_{n \in \mathbb{N}}$ be a nondecreasing sequence of compact intervals such that $\imath = \bigcup\_{n=1}^{\infty} K\_n$. For $f \in \mathcal{L}^2\_{\Delta}(\imath)$ put $f\_n(s) = f(s)$ for $s \in K\_n$ and $f\_n(s) = 0$ elsewhere. Then $f\_n \in \mathcal{L}^2\_{\Delta}(\imath)$, $f\_n$ has support in $K\_n$, and

$$\|f - f\_n\|\_{\Delta}^2 = \int\_{\imath} (f(s) - f\_n(s))^\* \Delta(s) (f(s) - f\_n(s)) \, ds \to 0$$

as $n \to \infty$, by dominated convergence. □

The space $\mathcal{L}^2\_{\Delta,\text{loc}}(\imath)$ consists of all $\mathbb{C}^2$-valued functions which are square-integrable with respect to $\Delta$ for each compact subinterval $K \subset \imath$, i.e.,

$$\int\_{K} f(s)^{\*} \Delta(s) f(s) \, ds < \infty.$$


A function $f \in \mathcal{L}^2\_{\Delta,\text{loc}}(\imath)$ is said to be square-integrable with respect to $\Delta$ at the left endpoint $a$ of the interval $\imath$ or square-integrable with respect to $\Delta$ at the right endpoint $b$ of the interval $\imath$ if for some, and hence for all, $c \in \mathbb{R}$ with $a < c < b$,

$$\int\_{a}^{c} f(s)^{\*} \Delta(s) f(s) \, ds \, < \infty \quad \text{or} \quad \int\_{c}^{b} f(s)^{\*} \Delta(s) f(s) \, ds \, < \infty,$$

respectively. A function $f \in \mathcal{L}^2\_{\Delta,\text{loc}}(\imath)$ belongs to $\mathcal{L}^2\_{\Delta}(\imath)$ if and only if $f$ is square-integrable with respect to $\Delta$ at both endpoints $a$ and $b$ of $\imath$.

Clearly, if Δ is a nonnegative matrix function and f is a vector function, then

$$|\Delta(s)f(s)| = |\Delta(s)^{\frac{1}{2}}\Delta(s)^{\frac{1}{2}}f(s)| \le |\Delta(s)^{\frac{1}{2}}| |\Delta(s)^{\frac{1}{2}}f(s)|.$$

Now the statement in the next lemma is a consequence of the Cauchy–Schwarz inequality and the fact that

$$|\Delta(s)^{\frac{1}{2}}|^2 = |\Delta(s)^{\frac{1}{2}}\Delta(s)^{\frac{1}{2}}| = |\Delta(s)|.$$

**Lemma 7.1.4.** Let $\Delta$ be a locally integrable nonnegative $2 \times 2$ matrix function on $\imath$ and let $K \subset \imath$ be compact. If $f \in \mathcal{L}^2\_{\Delta}(K)$, then $\Delta f \in \mathcal{L}^1(K)$ and

$$\int\_K |\Delta(s)f(s)| \, ds \le \left( \int\_K |\Delta(s)| \, ds \right)^{\frac{1}{2}} \left( \int\_K f(s)^\* \Delta(s) f(s) \, ds \right)^{\frac{1}{2}}.$$

In particular, if $f \in \mathcal{L}^2\_{\Delta}(\imath)$, then $\Delta f \in \mathcal{L}^1\_{\text{loc}}(\imath)$ and for all compact $K \subset \imath$

$$\int\_{K} |\Delta(s)f(s)| \, ds \le \left( \int\_{K} |\Delta(s)| \, ds \right)^{\frac{1}{2}} \|f\|\_{\Delta}.$$
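Since the proof is a pointwise bound followed by the Cauchy–Schwarz inequality, the same estimate holds exactly for Riemann sums, which gives a quick numerical check; a sketch (Python with numpy; the weight $\Delta$ below is a hypothetical choice for illustration):

```python
import numpy as np

s = np.linspace(0.0, 1.0, 401)     # grid on a compact interval K = [0, 1]
ds = s[1] - s[0]

def Delta(t):
    # hypothetical nonnegative 2x2 weight: X X^T >= 0
    X = np.array([[1.0 + t, t], [np.sin(t), 2.0]])
    return X @ X.T

f = np.stack([np.cos(3 * s), np.sin(2 * s)], axis=1)   # sample f on the grid

# int_K |Delta f| ds  vs  (int_K |Delta| ds)^{1/2} (int_K f^* Delta f ds)^{1/2}
lhs = sum(np.linalg.norm(Delta(t) @ ft) * ds for t, ft in zip(s, f))
int_norm = sum(np.linalg.norm(Delta(t), 2) * ds for t in s)      # operator norm |Delta(s)|
int_quad = sum(ft @ Delta(t) @ ft * ds for t, ft in zip(s, f))   # f(s)^* Delta(s) f(s)
rhs = np.sqrt(int_norm) * np.sqrt(int_quad)
```

Here `lhs <= rhs` holds for every grid, not just in the limit, because both steps of the proof apply verbatim to the sums.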

Let $\mathfrak{N} = \{f \in \mathcal{L}^2\_{\Delta}(\imath) : \|f\|\_{\Delta} = 0\}$, so that $\mathfrak{N}$ is a linear space, and consider the quotient space

$$L^2\_{\Delta}(\imath) := \mathcal{L}^2\_{\Delta}(\imath) / \mathfrak{N}$$

equipped with the scalar product induced by (7.1.8), that is, $(\hat{f}, \hat{g})\_{\Delta} = (f, g)\_{\Delta}$, where $f, g \in \mathcal{L}^2\_{\Delta}(\imath)$ are representatives of the equivalence classes $\hat{f}, \hat{g} \in L^2\_{\Delta}(\imath)$. From Theorem 7.1.2 it is clear that $L^2\_{\Delta}(\imath)$ is a Hilbert space. When no confusion can arise, the equivalence classes in $L^2\_{\Delta}(\imath)$ will also be referred to as functions that are square-integrable with respect to $\Delta$. Note that the compactly supported functions in $L^2\_{\Delta}(\imath)$ are dense in $L^2\_{\Delta}(\imath)$ by Lemma 7.1.3.

Recall that a $\mathbb{C}^2$-valued vector function $f$ on an open interval $\imath$ is absolutely continuous if there exists a $\mathbb{C}^2$-valued vector function $h \in \mathcal{L}^1\_{\text{loc}}(\imath)$ such that

$$f(t) - f(s) = \int\_{s}^{t} h(u) \, du\tag{7.1.10}$$

for all $s, t \in \imath$. In this case, $f$ is differentiable and $f' = h$ almost everywhere. The space of absolutely continuous $\mathbb{C}^2$-valued vector functions is denoted by $AC(\imath)$. When $a \in \mathbb{R}$, then $AC[a, b)$ stands for the subclass of $f \in AC(a, b)$ for which $h \in \mathcal{L}^1\_{\text{loc}}(a, b)$ in (7.1.10) additionally belongs to $\mathcal{L}^1(a, a')$ for some, and hence for all, $a'$ with $a < a' < b$, in which case

$$f(t) - f(a) = \int\_{a}^{t} h(u) \, du$$

holds for all $t \in (a, b)$ and thus $f(a) = \lim\_{t \to a} f(t)$. When $b \in \mathbb{R}$ there is a similar notation $AC(a, b]$, and for $f \in AC(a, b]$ one has $f(b) = \lim\_{t \to b} f(t)$. The notation $AC[a, b]$ is analogous.

## **7.2 Canonical systems of differential equations**

This section offers a brief review of so-called $2 \times 2$ canonical systems of differential equations. The existence and uniqueness result for linear systems of differential equations will be discussed and properties of the corresponding fundamental matrices will be derived.

Let $\imath = (a, b) \subset \mathbb{R}$ be an open, not necessarily bounded, interval and let $H$ and $\Delta$ be $2 \times 2$ matrix functions defined almost everywhere on $\imath$ such that

$$H, \Delta \in \mathcal{L}\_{\text{loc}}^1(\imath), \quad H(t) = H(t)^\*, \quad \text{and} \quad \Delta(t) \ge 0 \tag{7.2.1}$$

for almost every $t \in \imath$. Furthermore, let

$$J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \tag{7.2.2}$$

and note that $J^\* = -J = J^{-1}$. A canonical system is a system of differential equations of the form

$$Jf'(t) - H(t)f(t) = \lambda \Delta(t)f(t) + \Delta(t)g(t), \quad t \in \imath, \quad \lambda \in \mathbb{C}, \tag{7.2.3}$$

where $g \in \mathcal{L}^2\_{\Delta,\text{loc}}(\imath)$ is a function with values in $\mathbb{C}^2$ that is locally square-integrable with respect to $\Delta$. The condition $g \in \mathcal{L}^2\_{\Delta,\text{loc}}(\imath)$ implies that $\Delta g$ is locally integrable; cf. Lemma 7.1.4. In the general case of (7.2.3) one speaks of an inhomogeneous system, while if the term involving $\Delta g$ is absent, that is,

$$Jf'(t) - H(t)f(t) = \lambda \Delta(t)f(t), \quad t \in \imath, \quad \lambda \in \mathbb{C}, \tag{7.2.4}$$

one speaks of the corresponding homogeneous system.

A function $f$ on $\imath$ with values in $\mathbb{C}^2$ is said to be a solution of the canonical system (7.2.3) if $f$ belongs to $AC(\imath)$ and the equation (7.2.3) holds for almost every $t \in \imath$. Observe that if $f$ is a solution of (7.2.3), then $f$ is also a solution of (7.2.3) when $g \in \mathcal{L}^2\_{\Delta,\text{loc}}(\imath)$ is replaced by $\tilde{g} \in \mathcal{L}^2\_{\Delta,\text{loc}}(\imath)$ with $\Delta(g - \tilde{g}) = 0$. Furthermore, if $f$ is a solution of (7.2.3) and $h$ is a solution of (7.2.4), then $f + h$ is a solution of (7.2.3). In fact, the collection of all solutions of the homogeneous system (7.2.4) forms a linear space. The following result on the existence and uniqueness of solutions of initial value problems for inhomogeneous canonical systems will be useful.

**Theorem 7.2.1.** Let $g \in \mathcal{L}^2\_{\Delta,\mathrm{loc}}(\iota)$ and $\lambda \in \mathbb{C}$. Fix some $c\_0 \in \iota = (a, b)$ and $\gamma \in \mathbb{C}^2$. Then the initial value problem

$$Jf'(t) - H(t)f(t) = \lambda \Delta(t)f(t) + \Delta(t)g(t), \qquad f(c\_0) = \gamma,\tag{7.2.5}$$

admits a unique solution f ∈ AC(ı). Moreover, the mapping λ → f(t, λ) is entire for every fixed t ∈ ı.

In order to prove this theorem one replaces the initial value problem (7.2.5) by an equivalent integral equation; recall that the functions H and Δ are locally integrable. The integral equation can be solved, for instance, by successive iterations, which also leads to the statement concerning the mapping λ → f(t, λ) being entire, see, e.g., [754, Theorem 2.1].
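The successive-iteration scheme behind Theorem 7.2.1 is easy to probe numerically. The sketch below is an illustration only: it assumes the simplified toy data H = 0 and Δ = I (so that Jf′ = λf, i.e. f′ = −J(λf), since J⁻¹ = −J), integrates the initial value problem with a classical Runge–Kutta method, and compares the result with the explicit solution of this toy system.

```python
import numpy as np

# Toy canonical system with H = 0 and Delta = I (illustrative assumptions):
# J f' = lam * f, i.e. f' = -J @ (lam * f), because J^{-1} = -J.
J = np.array([[0.0, -1.0], [1.0, 0.0]])

def rk4(rhs, t0, t1, y0, n=4000):
    """Classical fourth-order Runge-Kutta integration of y' = rhs(t, y)."""
    t, y, h = t0, np.array(y0, dtype=float), (t1 - t0) / n
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h / 2 * k1)
        k3 = rhs(t + h / 2, y + h / 2 * k2)
        k4 = rhs(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

lam = 1.5
f_num = rk4(lambda t, f: -J @ (lam * f), 0.0, 1.0, [1.0, 0.0])

# For this toy system the unique solution with f(0) = (1, 0) is
# f(t) = (cos(lam*t), -sin(lam*t)).
f_exact = np.array([np.cos(lam), -np.sin(lam)])
err = np.max(np.abs(f_num - f_exact))
```

The agreement of `f_num` with `f_exact` to integrator accuracy reflects the uniqueness statement of the theorem for this toy case.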

In the next lemma a Lagrange identity for solutions of the inhomogeneous canonical system is obtained.

**Lemma 7.2.2.** Assume that $\lambda, \mu \in \mathbb{C}$ and that $g, k \in \mathcal{L}^2\_{\Delta,\mathrm{loc}}(\iota)$. Let f, h be solutions of the inhomogeneous equations

$$\begin{aligned} Jf'(t) - H(t)f(t) &= \lambda \Delta(t)f(t) + \Delta(t)g(t), \\ Jh'(t) - H(t)h(t) &= \mu \Delta(t)h(t) + \Delta(t)k(t), \end{aligned}$$

respectively. Then for every compact interval [α, β] ⊂ ı,

$$\begin{aligned} h(\beta)^{\*}Jf(\beta) - h(\alpha)^{\*}Jf(\alpha) &= \int\_{\alpha}^{\beta} \left(h(s)^{\*}\Delta(s)g(s) - k(s)^{\*}\Delta(s)f(s)\right)ds \\ &\quad + (\lambda - \overline{\mu})\int\_{\alpha}^{\beta} h(s)^{\*}\Delta(s)f(s)\,ds. \end{aligned}$$

Proof. The assumptions that J is skew-adjoint and that H(t) and Δ(t) are self-adjoint almost everywhere on ı lead to the identities

$$\begin{aligned} (h^\* J f)' &= h^\* (J f') - (J h')^\* f \\ &= h^\* (\lambda \Delta f + \Delta g + H f) - (\mu \Delta h + \Delta k + H h)^\* f \\ &= h^\* \Delta g - k^\* \Delta f + (\lambda - \overline{\mu}) h^\* \Delta f, \end{aligned}$$

which are valid almost everywhere on ı. Integration over the interval [α, β] completes the argument. $\square$

Taking λ = μ = 0 in Lemma 7.2.2, one obtains the following corollary. It provides the form of the Lagrange identity that will be studied in detail later in this chapter.

**Corollary 7.2.3.** Assume that $g, k \in \mathcal{L}^2\_{\Delta,\mathrm{loc}}(\iota)$. Let f, h be solutions of the inhomogeneous equations

$$\begin{aligned} Jf'(t) - H(t)f(t) &= \Delta(t)g(t), \\ Jh'(t) - H(t)h(t) &= \Delta(t)k(t), \end{aligned} \tag{7.2.6}$$

respectively. Then for every compact interval [α, β] ⊂ ı,

$$h(\beta)^{\*}Jf(\beta) - h(\alpha)^{\*}Jf(\alpha) = \int\_{\alpha}^{\beta} \left( h(s)^{\*}\Delta(s)g(s) - k(s)^{\*}\Delta(s)f(s) \right) ds.$$

There is also a corollary of Lemma 7.2.2 involving solutions of the corresponding homogeneous system. Let $Y\_1(\cdot, \lambda)$ and $Y\_2(\cdot, \lambda)$ be solutions of (7.2.4) and define the solution matrix

$$Y(\cdot,\lambda) = \begin{pmatrix} Y\_1(\cdot,\lambda) & Y\_2(\cdot,\lambda) \end{pmatrix}, \qquad \lambda \in \mathbb{C}, \tag{7.2.7}$$

which is a $2 \times 2$ matrix function for each $\lambda \in \mathbb{C}$. Then the matrix function Y(·, λ) solves the equation (7.2.4) in the sense that it actually solves the matrix version of (7.2.4),

$$JY'(t,\lambda) - H(t)Y(t,\lambda) = \lambda \Delta(t)Y(t,\lambda), \quad t \in \iota.$$

**Corollary 7.2.4.** Let Y(·, λ) be a solution matrix of the homogeneous canonical system (7.2.4). Then for every compact interval [α, β] ⊂ ı and all $\lambda, \mu \in \mathbb{C}$,

$$Y(\beta,\mu)^\* JY(\beta,\lambda) - Y(\alpha,\mu)^\* JY(\alpha,\lambda) = (\lambda - \overline{\mu}) \int\_{\alpha}^{\beta} Y(s,\mu)^\* \Delta(s) Y(s,\lambda) ds.$$

In particular, for all [α, β] ⊂ ı and all $\lambda \in \mathbb{C}$,

$$Y(\beta,\overline{\lambda})^\* JY(\beta,\lambda) = Y(\alpha,\overline{\lambda})^\* JY(\alpha,\lambda).$$

It is a consequence of Corollary 7.2.4 that for every solution matrix Y (·, λ) the function

$$t \mapsto Y(t, \overline{\lambda})^\* JY(t, \lambda)$$

is constant on ı. Hence, if for some $c\_0 \in \iota$

$$Y(c\_0, \overline{\lambda})^\* JY(c\_0, \lambda) = J,\tag{7.2.8}$$

then $Y(t, \overline{\lambda})^\*JY(t, \lambda) = J$ for all t ∈ ı. This shows that

$$Y(t,\lambda)^{-1} = -JY(t,\overline{\lambda})^\*J \quad \text{and} \quad Y(t,\overline{\lambda})^{-\*} = -JY(t,\lambda)J, \quad t \in \iota,\tag{7.2.9}$$

and thus it also follows that

$$Y(t,\lambda)JY(t,\overline{\lambda})^\* = J, \quad t \in \iota.\tag{7.2.10}$$

Let X be an invertible matrix which does not depend on λ and assume that X∗JX = J. Let Y (·, λ) be the solution matrix which is fixed by the initial condition

$$Y(c\_0, \lambda) = X \tag{7.2.11}$$

for some $c\_0 \in \iota$ and all $\lambda \in \mathbb{C}$. Then (7.2.8) is valid and hence (7.2.9) and (7.2.10) are satisfied; the matrix Y(·, λ) is a fundamental matrix, that is, its columns are linearly independent solutions of the homogeneous canonical system (7.2.4) on ı. Frequently the fundamental matrix Y(·, λ) will be fixed by the initial condition

$$Y(c\_0, \lambda) = I \tag{7.2.12}$$

for some $c\_0 \in \iota$.
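The identities (7.2.8)–(7.2.10) can be confirmed numerically. The following sketch again uses the illustrative toy data H = 0 and Δ = I (an assumption made only for this example): it integrates the matrix equation for Y(·, λ) and Y(·, λ̄) with Y(0, ·) = I and checks that Y(t, λ̄)\*JY(t, λ) reproduces J along the flow.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]], dtype=complex)

def fundamental_matrix(lam, t, n=4000):
    """RK4 integration of Y' = -J (lam Y), Y(0) = I (toy case H = 0, Delta = I)."""
    Y, h = np.eye(2, dtype=complex), t / n
    for _ in range(n):
        k1 = -J @ (lam * Y)
        k2 = -J @ (lam * (Y + h / 2 * k1))
        k3 = -J @ (lam * (Y + h / 2 * k2))
        k4 = -J @ (lam * (Y + h * k3))
        Y = Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return Y

lam = 0.7 + 0.3j
Y_lam = fundamental_matrix(lam, 1.0)
Y_bar = fundamental_matrix(np.conj(lam), 1.0)

# Since Y(0, .) = I, condition (7.2.8) holds with c0 = 0, so
# Y(t, conj(lam))^* J Y(t, lam) = J should persist for all t;
# cf. (7.2.9) and (7.2.10).
dev = np.max(np.abs(Y_bar.conj().T @ J @ Y_lam - J))
```

The deviation `dev` is of the order of the integrator error, in line with the constancy of $t \mapsto Y(t,\overline{\lambda})^\*JY(t,\lambda)$.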

According to Theorem 7.2.1, there is a unique solution of the initial value problem (7.2.5). It is possible to express this unique solution in terms of the fundamental matrix Y(·, λ) determined by the initial condition (7.2.12) (and in a similar way with the initial condition (7.2.11)). In fact, for any $\lambda \in \mathbb{C}$, any $\gamma \in \mathbb{C}^2$, and any $g \in \mathcal{L}^2\_{\Delta,\mathrm{loc}}(\iota)$, the unique solution of the inhomogeneous initial value problem

$$Jf' - Hf = \lambda \Delta f + \Delta g, \quad f(c\_0) = \gamma,\tag{7.2.13}$$

is provided by the variation of constants formula:

$$f(t) = Y(t, \lambda)\gamma + Y(t, \lambda) \int\_{c\_0}^{t} Y(s, \lambda)^{-1} J^{-1} \Delta(s) g(s) \, ds. \tag{7.2.14}$$

This can be seen by verifying that the second term on the right-hand side is a solution of the inhomogeneous equation that vanishes at $c\_0$. Making use of (7.2.9), one recasts (7.2.14) as

$$f(t) = Y(t, \lambda)\gamma - Y(t, \lambda) \int\_{c\_0}^{t} JY(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds. \tag{7.2.15}$$

In terms of the notation (7.2.7) for the fundamental matrix Y (·, λ) fixed by (7.2.12) the unique solution (7.2.15) of (7.2.13) can be written as

$$\begin{aligned} f(t) &= Y\_1(t, \lambda)\gamma\_1 + Y\_2(t, \lambda)\gamma\_2 \\ &\quad + Y\_1(t, \lambda) \int\_{c\_0}^t Y\_2(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds \\ &\quad - Y\_2(t, \lambda) \int\_{c\_0}^t Y\_1(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds, \end{aligned} \tag{7.2.16}$$

where $\gamma = (\gamma\_1, \gamma\_2)$. This form of the solution will be used later.
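As a sanity check of the variation of constants formula, the sketch below compares (7.2.14) with a direct numerical integration of the inhomogeneous equation. The data are illustrative assumptions only: H = 0, Δ = I, λ real, and g constant, in which case the fundamental matrix Y(·, λ) with Y(0, λ) = I is a rotation matrix known in closed form.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])
lam = 1.0
gamma = np.array([1.0, 0.5])
g = np.array([0.3, -0.2])  # constant inhomogeneity (illustrative choice)

def rk4(rhs, y0, t1, n=4000):
    """Classical fourth-order Runge-Kutta integration on [0, t1]."""
    t, y, h = 0.0, np.array(y0, dtype=float), t1 / n
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h / 2 * k1)
        k3 = rhs(t + h / 2, y + h / 2 * k2)
        k4 = rhs(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Direct integration of J f' = lam f + g, i.e. f' = -J (lam f + g).
f_direct = rk4(lambda t, y: -J @ (lam * y + g), gamma, 1.0)

# Variation of constants (7.2.14): for this toy system the fundamental
# matrix with Y(0, lam) = I is the rotation matrix below, and J^{-1} = -J.
def Y(t):
    c, s = np.cos(lam * t), np.sin(lam * t)
    return np.array([[c, s], [-s, c]])

ts = np.linspace(0.0, 1.0, 4001)
vals = np.array([np.linalg.inv(Y(s)) @ (-J) @ g for s in ts])
h = ts[1] - ts[0]
integral = h * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])  # trapezoid rule
f_voc = Y(1.0) @ gamma + Y(1.0) @ integral
gap = np.max(np.abs(f_direct - f_voc))
```

The two values of f(1) agree up to quadrature and integrator error, illustrating that (7.2.14) indeed produces the unique solution of (7.2.13) in this toy case.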

The general form of the inhomogeneous equation (7.2.3) can be simplified by a transformation of the system. This transformation will be employed in Corollary 7.4.8.

**Lemma 7.2.5.** Let $\lambda\_0 \in \mathbb{R}$, $c\_0 \in \iota$, and let $U(\cdot, \lambda\_0)$ be a solution matrix which satisfies

$$JU'(\cdot,\lambda\_0) - HU(\cdot,\lambda\_0) = \lambda\_0 \Delta U(\cdot,\lambda\_0), \quad U(c\_0,\lambda\_0)^\* JU(c\_0,\lambda\_0) = J. \tag{7.2.17}$$

Assume that $g \in \mathcal{L}^2\_{\Delta,\mathrm{loc}}(\iota)$ and let f be a solution of the inhomogeneous equation (7.2.3). Define the functions $\widetilde{f}$, $\widetilde{g}$, and $\widetilde{\Delta}$ by

$$\tilde{f} = U(\cdot, \lambda\_0)^{-1} f, \quad \tilde{g} = U(\cdot, \lambda\_0)^{-1} g, \quad \tilde{\Delta}(\cdot) = U(\cdot, \lambda\_0)^\* \Delta(\cdot) U(\cdot, \lambda\_0). \tag{7.2.18}$$

Then $\widetilde{\Delta}$ is a locally integrable nonnegative measurable matrix function,

$$
\widetilde{f}^\* \widetilde{\Delta} \widetilde{f} = f^\* \Delta f \quad \text{and} \quad \widetilde{g}^\* \widetilde{\Delta} \widetilde{g} = g^\* \Delta g,\tag{7.2.19}
$$

and, in particular, $\widetilde{g} \in \mathcal{L}^2\_{\widetilde{\Delta},\mathrm{loc}}(\iota)$. Moreover, the function $\widetilde{f}$ is a solution of the system of differential equations

$$J\tilde{f}' = (\lambda - \lambda\_0)\tilde{\Delta}\tilde{f} + \tilde{\Delta}\tilde{g}.\tag{7.2.20}$$

Conversely, if

$$\widetilde{f}, \widetilde{g} \in \mathcal{L}^{2}\_{\widetilde{\Delta}, \mathrm{loc}}(\iota) \quad \text{and} \quad \widetilde{\Delta}(\cdot) = U(\cdot, \lambda\_{0})^{\*} \Delta(\cdot) U(\cdot, \lambda\_{0})$$

satisfy the equation (7.2.20), then $f = U(\cdot, \lambda\_0)\widetilde{f}$ and $g = U(\cdot, \lambda\_0)\widetilde{g}$ satisfy the inhomogeneous equation (7.2.3).

Proof. First observe that it is a direct consequence of (7.2.17) that the function U(·, λ0) satisfies

$$U(\cdot,\lambda\_0)^\* J U(\cdot,\lambda\_0) = J;$$

cf. (7.2.8). In particular, this shows that U(t, λ0) is invertible for each t ∈ ı.

Let $\widetilde{f}$, $\widetilde{g}$, and $\widetilde{\Delta}$ be defined by (7.2.18). Then it is clear that (7.2.19) holds. Since $g \in \mathcal{L}^2\_{\Delta,\mathrm{loc}}(\iota)$, it also follows that $\widetilde{g} \in \mathcal{L}^2\_{\widetilde{\Delta},\mathrm{loc}}(\iota)$. Moreover,

$$Jf' - Hf = \lambda \Delta f + \Delta g\tag{7.2.21}$$

holds by assumption. Substituting $f = U(\cdot, \lambda\_0)\widetilde{f}$ and $g = U(\cdot, \lambda\_0)\widetilde{g}$ in (7.2.21), multiplying by $U(\cdot, \lambda\_0)^\*$ from the left, and using (7.2.17), a straightforward calculation leads to (7.2.20). Similarly, one verifies by a direct calculation that the converse statement holds. $\square$

It follows from (7.2.19) that the functions f or g in (7.2.18) are square-integrable with respect to Δ if and only if $\widetilde{f}$ or $\widetilde{g}$ are square-integrable with respect to $\widetilde{\Delta}$, respectively. The transformation in Lemma 7.2.5 implies that the boundary terms $h(x)^\*Jf(x)$ in the Lagrange formula of the original equation in Lemma 7.2.2 can be written in terms of the boundary terms of the corresponding solutions of the transformed equation (7.2.20).

**Corollary 7.2.6.** Assume that $\lambda, \mu \in \mathbb{C}$, $g, k \in \mathcal{L}^2\_{\Delta,\mathrm{loc}}(\iota)$, and that f, h are solutions of the inhomogeneous equations

$$\begin{aligned} Jf'(t) - H(t)f(t) &= \lambda \Delta(t)f(t) + \Delta(t)g(t), \\ Jh'(t) - H(t)h(t) &= \mu \Delta(t)h(t) + \Delta(t)k(t). \end{aligned}$$

Assume that $U(\cdot, \lambda\_0)$ with $\lambda\_0 \in \mathbb{R}$ is a solution matrix which satisfies (7.2.17) and define the functions $\widetilde{f} = U(\cdot, \lambda\_0)^{-1}f$ and $\widetilde{h} = U(\cdot, \lambda\_0)^{-1}h$ as in (7.2.18). Then for each t ∈ ı

$$h(t)^\* Jf(t) = \widetilde{h}(t)^\* J\widetilde{f}(t).$$

Recall that the functions H and Δ were assumed to be 2×2 matrix functions with complex entries. When these functions are real the solutions enjoy a certain symmetry property.

**Definition 7.2.7.** The canonical system $Jf' - Hf = \lambda \Delta f$ is said to be real if the entries of the 2 × 2 matrix functions H and Δ in (7.2.1) are real functions.

To deal with real canonical systems the notion of conjugate matrices is useful. For a matrix T the conjugate matrix $\overline{T}$ is the matrix whose entries are the complex conjugates of the entries of T. Let T and S be matrices, not necessarily of the same size, for which the matrix product TS is defined. Then clearly

$$
\overline{TS} = \overline{T}\,\overline{S}.\tag{7.2.22}
$$

**Lemma 7.2.8.** Assume that the canonical system (7.2.4) is real. Let Y(·, λ) be a solution matrix of (7.2.4) such that for all $\lambda \in \mathbb{C}$

$$\overline{Y}(c\_0, \overline{\lambda}) = Y(c\_0, \lambda) \tag{7.2.23}$$

for some point $c\_0 \in \iota$. Then

$$\overline{Y}(\cdot,\overline{\lambda}) = Y(\cdot,\lambda) \tag{7.2.24}$$

for all $\lambda \in \mathbb{C}$. In particular, (7.2.24) holds when Y(·, λ) is a fundamental matrix fixed by (7.2.11) or (7.2.12).

Proof. By definition, the solution matrix Y (·, λ) satisfies

$$JY'(\cdot,\lambda) - HY(\cdot,\lambda) = \lambda \Delta Y(\cdot,\lambda). \tag{7.2.25}$$

By assumption, the entries of J, H, and Δ are real; hence taking complex conjugates and using (7.2.22) one sees that

$$J\overline{Y}'(\cdot,\lambda) - H\overline{Y}(\cdot,\lambda) = \overline{\lambda}\Delta\overline{Y}(\cdot,\lambda).$$

Therefore, the matrix function $\overline{Y}(\cdot, \overline{\lambda})$ satisfies the same equation (7.2.25) as $Y(\cdot, \lambda)$, and by (7.2.23) these matrix functions satisfy the same initial condition at $c\_0$. Now the uniqueness in Theorem 7.2.1 leads to (7.2.24). $\square$

The following observation is an easy consequence of Lemma 7.2.8.

**Corollary 7.2.9.** Let the system $Jf' - Hf = \lambda \Delta f$ be real and let a fundamental matrix Y(·, λ) be fixed by the initial condition Y(c, λ) = I for some a < c < b. Then for every $u \in \mathbb{C}^2$

$$\int\_{\mathfrak{u}} u^\* Y(s, \lambda)^\* \Delta(s) Y(s, \lambda) u \, ds = \int\_{\mathfrak{u}} \overline{u}^\* Y(s, \overline{\lambda})^\* \Delta(s) Y(s, \overline{\lambda}) \overline{u} \, ds.$$

In particular,

$$Y(\cdot, \lambda)u \in \mathcal{L}^2\_{\Delta}(\iota) \quad \Leftrightarrow \quad Y(\cdot, \overline{\lambda})\overline{u} \in \mathcal{L}^2\_{\Delta}(\iota).$$

Proof. Clearly, for any $u \in \mathbb{C}^2$ and all s ∈ ı one has

$$u^\* Y(s, \lambda)^\* \Delta(s) Y(s, \lambda) u \ge 0.$$

Therefore,

$$\begin{aligned} u^\* Y(s, \lambda)^\* \Delta(s) Y(s, \lambda) u &= \overline{u^\* Y(s, \lambda)^\* \Delta(s) Y(s, \lambda) u} \\ &= \overline{u}^\* Y(s, \overline{\lambda})^\* \Delta(s) Y(s, \overline{\lambda}) \overline{u}, \end{aligned}$$

which gives the assertion. $\square$
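The symmetry in Lemma 7.2.8 and Corollary 7.2.9 can also be observed numerically. The sketch below uses the real toy system with H = 0 and Δ = I (an illustrative assumption): since Y(0, ·) = I satisfies (7.2.23), one expects $\overline{Y}(t, \overline{\lambda}) = Y(t, \lambda)$.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]], dtype=complex)

def fundamental_matrix(lam, t, n=4000):
    """RK4 for Y' = -J (lam Y) with Y(0) = I (real toy system H = 0, Delta = I)."""
    Y, h = np.eye(2, dtype=complex), t / n
    for _ in range(n):
        k1 = -J @ (lam * Y)
        k2 = -J @ (lam * (Y + h / 2 * k1))
        k3 = -J @ (lam * (Y + h / 2 * k2))
        k4 = -J @ (lam * (Y + h * k3))
        Y = Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return Y

lam = 0.4 + 0.9j
Y_lam = fundamental_matrix(lam, 1.0)
Y_conj = fundamental_matrix(np.conj(lam), 1.0)

# Lemma 7.2.8 for a real system: conj(Y(t, conj(lam))) = Y(t, lam).
sym_dev = np.max(np.abs(np.conj(Y_conj) - Y_lam))
```

The deviation `sym_dev` is of the size of the integrator error, consistent with (7.2.24).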

## **7.3 Regular and quasiregular endpoints**

In this section the notions of a regular and a quasiregular endpoint of the interval ı are introduced; this makes it possible to extend Theorem 7.2.1 so that one may solve an initial value problem at an endpoint.

The following definition gives a classification for the endpoints of the canonical system (7.2.3).

**Definition 7.3.1.** An endpoint of the interval ı is said to be a quasiregular endpoint of the canonical system (7.2.3) if the locally integrable functions H and Δ in (7.2.1) are integrable up to that endpoint. A finite quasiregular endpoint is called regular. An endpoint is said to be singular when it is not regular. The canonical system (7.2.3) is called regular if both endpoints are regular; otherwise it is called singular.

The main result in this section implies that if the term $g \in \mathcal{L}^2\_{\Delta,\mathrm{loc}}(\iota)$ in (7.2.3) is square-integrable with respect to Δ at an endpoint which is regular or quasiregular, then every solution of the inhomogeneous equation has a continuous extension to that endpoint and is square-integrable with respect to Δ there.

**Proposition 7.3.2.** Assume that the endpoint a or b of ı = (a, b) is regular or quasiregular and that $g \in \mathcal{L}^2\_{\Delta,\mathrm{loc}}(\iota)$ is square-integrable with respect to Δ at a or b, respectively. Then each solution f of (7.2.3) is square-integrable with respect to Δ at a or at b and the limits

$$f(a) := \lim\_{t \to a} f(t) \quad or \quad f(b) := \lim\_{t \to b} f(t) \tag{7.3.1}$$

exist, respectively. Moreover, for each $\gamma \in \mathbb{C}^2$ there exists a unique solution f of (7.2.3) such that f(a) = γ or f(b) = γ, respectively, and the corresponding function λ → f(t, λ) is entire for every t ∈ ı as well as for t = a or t = b, respectively.

Proof. It suffices to consider the case of the endpoint b. So let b be a regular or quasiregular endpoint, let $\lambda \in \mathbb{C}$, and fix c ∈ (a, b). The proof is split into three separate steps.

Step 1. Any solution f of (7.2.3) with f(c) = η satisfies

$$f(t) = \eta + \int\_{c}^{t} J^{-1} \left(\lambda \Delta(s) + H(s)\right) f(s) \, ds + \int\_{c}^{t} J^{-1} \Delta(s) g(s) \, ds \tag{7.3.2}$$

with t ∈ ı. Recall that, since g is square-integrable with respect to Δ at b, it follows that Δg is integrable on [c, b); cf. Lemma 7.1.4. By definition also λΔ + H is integrable on [c, b). Hence, Gronwall's lemma in Section 6.13 (see Lemma 6.13.2) shows that

$$|f(t)| \le \left( |\eta| + \int\_c^t |\Delta(s)g(s)| \, ds \right) e^{\int\_c^t |\lambda \Delta(s) + H(s)| \, ds}, \quad c \le t < b. \tag{7.3.3}$$

Thus, the solution f is bounded on [c, b): |f(s)| ≤ M for c < s < b. In particular, this shows that

$$\int\_{c}^{b} f(s)^{\*} \Delta(s) f(s) \, ds \le M^{2} \int\_{c}^{b} |\Delta(s)| \, ds < \infty,$$

and hence f is square-integrable with respect to Δ at b. Moreover, it is clear from (7.3.2) that the limit $f(b) = \lim\_{t \to b} f(t)$ in (7.3.1) exists.

Step 2. In the special case where h is a solution of the homogeneous system (7.2.4) with h(c) = η it follows from (7.3.2) that

$$h(b) = \eta + \int\_{c}^{b} J^{-1} \left(\lambda \Delta(s) + H(s)\right) h(s) \, ds. \tag{7.3.4}$$

The solution h(·, λ) actually depends on λ, and according to Theorem 7.2.1, for each c ≤ t < b the function λ → h(t, λ) is entire. It will be shown that λ → h(b, λ) is also entire. In fact, from (7.3.4) it is clear that it suffices to prove that the mapping

$$
\lambda \mapsto \int\_{c}^{b} J^{-1} \left( \lambda \Delta(s) + H(s) \right) h(s, \lambda) \, ds \tag{7.3.5}
$$

is entire. To see this, note that (7.3.3) and the equality $h(b) = \lim\_{t \to b} h(t)$ imply that, for each compact set $K \subset \mathbb{C}$,

$$|h(t, \lambda)| \le C\_K e^{\int\_c^b (|\Delta(s)| + |H(s)|) \, ds}$$

for all c ≤ t ≤ b and for all λ ∈ K. Hence, by dominated convergence, the mapping in (7.3.5) is continuous, and an application of Morera's theorem implies that this mapping is holomorphic. Therefore, λ → h(b, λ) is entire.

Step 3. Let Z(·, λ) be a fundamental matrix of the homogeneous equation (7.2.4) fixed by Z(c, λ) = I. Then, according to Step 1 and Step 2, one has

$$Z(t, \lambda) = I + \int\_{c}^{t} J^{-1} \left(\lambda \Delta(s) + H(s)\right) Z(s, \lambda) \, ds$$

and Gronwall's lemma yields the estimate

$$|Z(t,\lambda)| \le e^{\int\_c^t |\lambda \Delta(s) + H(s)| \, ds}, \quad c \le t < b. \tag{7.3.6}$$

Thus, $Z(b, \lambda) = \lim\_{t \to b} Z(t, \lambda)$ exists and it follows from Step 2 that the mapping λ → Z(b, λ) is entire. Moreover, from $Z(t, \overline{\lambda})^\*JZ(t, \lambda) = J$ for c ≤ t < b one concludes by taking the limit t → b that the matrix Z(b, λ) is invertible for all $\lambda \in \mathbb{C}$. It is also clear that $Z(b, \overline{\lambda})^\*JZ(b, \lambda) = J$. Thus, the function U(·, λ) defined by

$$U(t, \lambda) = Z(t, \lambda) Z(b, \lambda)^{-1}$$

is a fundamental matrix of the homogeneous equation which satisfies U(b, λ) = I, and λ → U(t, λ) is entire for c ≤ t ≤ b. Therefore, if $\gamma \in \mathbb{C}^2$ is fixed, one sees that

$$f(t) = U(t, \lambda)\gamma + U(t, \lambda) \int\_{t}^{b} J U(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds$$

is the unique solution of the inhomogeneous equation with f(b) = γ; cf. (7.2.15). It remains to verify that λ → f(t, λ) is entire for c ≤ t ≤ b. For this it suffices to check that

$$\lambda \mapsto U(t, \lambda) \int\_{t}^{b} J U(s, \overline{\lambda})^{\*} \Delta(s) g(s) \, ds = Z(t, \lambda) \int\_{t}^{b} J Z(s, \overline{\lambda})^{\*} \Delta(s) g(s) \, ds$$

is entire for c ≤ t ≤ b, which can be seen with the help of (7.3.6) in the same way as in Step 2. $\square$

**Corollary 7.3.3.** Assume that the endpoints a and b of the canonical system (7.2.3) are regular or quasiregular and that $g \in \mathcal{L}^2\_{\Delta}(\iota)$. Then each solution f of (7.2.3) belongs to $\mathcal{L}^2\_{\Delta}(\iota)$ and both limits in (7.3.1) exist.

The next statement follows from Corollary 7.2.3 and Corollary 7.3.3.

**Corollary 7.3.4.** Assume that the endpoints a and b of the canonical system (7.2.3) are regular or quasiregular and that $g, k \in \mathcal{L}^2\_{\Delta}(\iota)$. Let f, h be solutions of the inhomogeneous equations (7.2.6). Then

$$h(b)^{\*}Jf(b) - h(a)^{\*}Jf(a) = \int\_{a}^{b} \left( h(s)^{\*}\Delta(s)g(s) - k(s)^{\*}\Delta(s)f(s) \right)ds.$$

Finally, the next statement is a consequence of Proposition 7.3.2 and identity (7.2.10).

**Corollary 7.3.5.** Assume that the endpoint a or b of the canonical system (7.2.3) is regular or quasiregular and let Y(·, λ) be a fundamental matrix of the canonical system (7.2.3). Then Y(·, λ)φ is square-integrable with respect to Δ at a or b for every $\varphi \in \mathbb{C}^2$, and Y(·, λ) admits a unique continuous extension to a or b such that Y(a, λ) or Y(b, λ) is invertible, respectively. In particular, the point $c\_0$ in (7.2.12) can be chosen to be a or b, respectively.

## **7.4 Square-integrability of solutions of real canonical systems**

Let ı = (a, b) be an open interval and consider on this interval the homogeneous system $Jf' - Hf = \lambda \Delta f$. Recall that a solution f, depending on $\lambda \in \mathbb{C}$, is called square-integrable with respect to Δ at a or b if for some c ∈ ı

$$\int\_{a}^{c} f(s)^{\*} \Delta(s) f(s) \, ds \, < \infty \quad \text{or} \quad \int\_{c}^{b} f(s)^{\*} \Delta(s) f(s) \, ds \, < \infty,$$

respectively. In this section the existence of such solutions is studied for real canonical systems; cf. Definition 7.2.7. The first main result asserts that if there are two linearly independent solutions which are square-integrable with respect to Δ at an endpoint for some $\lambda \in \mathbb{C}$, then for any $\lambda \in \mathbb{C}$ all solutions are square-integrable with respect to Δ at that endpoint. The second main result states that for any $\lambda \in \mathbb{C} \setminus \mathbb{R}$ there is at least one solution that is square-integrable with respect to Δ at an endpoint. A combination of these two results gives a general description of the existence of the solutions that are square-integrable with respect to Δ at an endpoint and leads to the limit-point and limit-circle classification.

In the rest of this section it will be assumed that the system (7.2.3) is real and the symmetry result in Corollary 7.2.9 will be used throughout.

**Theorem 7.4.1.** Assume that for $\lambda\_0 \in \mathbb{C}$ the equation $Jf' - Hf = \lambda\_0 \Delta f$ has two linearly independent solutions which are square-integrable with respect to Δ at a or b. Then for any $\lambda \in \mathbb{C}$ each solution of $Jf' - Hf = \lambda \Delta f$ is square-integrable with respect to Δ at a or b, respectively.

Proof. It is sufficient to show the result for one endpoint, say b. Assume without loss of generality that the endpoint a of the canonical system (7.2.3) is regular. Fix a fundamental matrix $Y(\cdot, \lambda\_0)$ by the initial condition $Y(a, \lambda\_0) = I$. The columns $Y\_1(\cdot, \lambda\_0)$ and $Y\_2(\cdot, \lambda\_0)$ of $Y(\cdot, \lambda\_0)$ belong to $\mathcal{L}^2\_{\Delta}(\iota)$ by assumption. As the system is assumed to be real, one has

$$\int\_{a}^{b} |\Delta(s)^{\frac{1}{2}} Y\_i(s, \overline{\lambda}\_0)|^2 \, ds = \int\_{a}^{b} |\Delta(s)^{\frac{1}{2}} Y\_i(s, \lambda\_0)|^2 \, ds, \quad i = 1, 2; \tag{7.4.1}$$

cf. Corollary 7.2.9.

Let $\lambda \in \mathbb{C}$ and let f(·, λ) be any solution of $Jf' - Hf = \lambda \Delta f$. It will be shown that f(·, λ) is square-integrable with respect to Δ at b. Since the function f(·, λ) satisfies

$$Jf'(\cdot,\lambda) - Hf(\cdot,\lambda) = \lambda\_0 \Delta f(\cdot,\lambda) + (\lambda - \lambda\_0)\Delta f(\cdot,\lambda),$$

it follows from (7.2.16) (with g = (λ − λ0)f(·, λ)) that f(·, λ) can be written as

$$\begin{split} f(t,\lambda) &= Y\_1(t,\lambda\_0)\alpha\_1 + Y\_2(t,\lambda\_0)\alpha\_2 \\ &\quad + (\lambda - \lambda\_0) \left[ Y\_1(t,\lambda\_0)y\_2(t,\lambda) - Y\_2(t,\lambda\_0)y\_1(t,\lambda) \right], \end{split} \tag{7.4.2}$$

where $f(a, \lambda) = (\alpha\_1, \alpha\_2)$ and $y\_i(\cdot, \lambda)$ is defined by

$$y\_i(t, \lambda) = \int\_a^t Y\_i(s, \overline{\lambda}\_0)^\* \Delta(s) f(s, \lambda) \, ds, \quad i = 1, 2,$$

respectively. By applying the Cauchy–Schwarz inequality in the definition of $y\_i(t, \lambda)$ and using (7.4.1), one obtains for i = 1, 2,

$$\begin{split} |y\_i(t,\lambda)| &\leq \sqrt{\int\_a^t |\Delta(s)^{\frac{1}{2}} Y\_i(s, \overline{\lambda}\_0)|^2 \, ds} \sqrt{\int\_a^t |\Delta(s)^{\frac{1}{2}} f(s, \lambda)|^2 \, ds} \\ &\leq \sqrt{\int\_a^b |\Delta(s)^{\frac{1}{2}} Y\_i(s, \overline{\lambda}\_0)|^2 \, ds} \sqrt{\int\_a^t |\Delta(s)^{\frac{1}{2}} f(s, \lambda)|^2 \, ds} \\ &= \sqrt{\int\_a^b |\Delta(s)^{\frac{1}{2}} Y\_i(s, \lambda\_0)|^2 \, ds} \sqrt{\int\_a^t |\Delta(s)^{\frac{1}{2}} f(s, \lambda)|^2 \, ds}. \end{split}$$

Introduce the number α ≥ 0 and the nonnegative function ϕ by

$$\alpha = \max\left\{ |\alpha\_1|, |\alpha\_2| \right\}, \quad \varphi(t) = \max\left\{ |\Delta(t)^{\frac{1}{2}} Y\_1(t, \lambda\_0)|, |\Delta(t)^{\frac{1}{2}} Y\_2(t, \lambda\_0)| \right\},$$

so that $\varphi \in L^2(a, b)$. Multiplying both sides of the identity (7.4.2) from the left by $\Delta(t)^{\frac{1}{2}}$, one obtains

$$\begin{aligned} &|\Delta(t)^{\frac{1}{2}}f(t,\lambda)| \\ &\leq 2\alpha\varphi(t) + 2|\lambda - \lambda\_0|\varphi(t)\sqrt{\int\_a^b \varphi(s)^2 \,ds} \sqrt{\int\_a^t |\Delta(s)^{\frac{1}{2}}f(s,\lambda)|^2 \,ds}. \end{aligned}$$

Therefore, one obtains that

$$|\Delta(t)^{\frac{1}{2}}f(t,\lambda)|^2 \le \varphi(t)^2 \left( A + B \int\_a^t |\Delta(s)^{\frac{1}{2}} f(s,\lambda)|^2 \, ds \right),\tag{7.4.3}$$

where

$$A = 8\alpha^2, \quad B = 8|\lambda - \lambda\_0|^2 \int\_a^b \varphi(s)^2 \, ds.$$

It follows from (7.4.3) by means of Lemma 6.1.4, with $u(t) = |\Delta(t)^{\frac{1}{2}} f(t, \lambda)|$, ϕ as above, and r = 1, that the function f(·, λ) is square-integrable with respect to Δ at b. $\square$

Next it will be shown that for each endpoint and any $\lambda \in \mathbb{C} \setminus \mathbb{R}$ there is at least one solution of the homogeneous canonical system (7.2.4) which is square-integrable with respect to Δ at that endpoint. The proof of this fact is based on the monotonicity principle in Section 5.2; cf. Corollary 5.2.14. To apply this result, let Y(·, λ) be a fundamental matrix of the canonical system (7.2.3) fixed as in (7.2.12) and consider the 2 × 2 matrix function D(·, λ) on ı defined by

$$D(t,\lambda) = Y(t,\lambda)^\*(-iJ)Y(t,\lambda), \quad t \in \iota, \quad \lambda \in \mathbb{C}.\tag{7.4.4}$$

Observe that the function t → D(t, λ), t ∈ ı, is absolutely continuous for every $\lambda \in \mathbb{C}$ and that the matrices D(t, λ) are self-adjoint and invertible for all t ∈ ı and $\lambda \in \mathbb{C}$.

According to the following theorem, the matrix function in (7.4.4) admits selfadjoint limits at a and b, which may be either self-adjoint matrices or self-adjoint relations with a one-dimensional domain and a one-dimensional multivalued part. Furthermore, the dimensions of the domains of the limit relations are directly connected with the number of linearly independent solutions of the homogeneous canonical system (7.2.4) that are square-integrable with respect to Δ.

**Theorem 7.4.2.** For $\lambda \in \mathbb{C}^+$ or $\lambda \in \mathbb{C}^-$ the 2 × 2 matrix function t → D(t, λ) is nondecreasing or nonincreasing on ı, respectively. There exist self-adjoint relations D(a, λ) and D(b, λ) in $\mathbb{C}^2$ such that

$$D(t, \lambda) \to D(a, \lambda) \quad \text{and} \quad D(t, \lambda) \to D(b, \lambda)$$

in the (strong) resolvent sense when t → a and t → b, respectively, and

$$1 \le \dim \text{dom}\, D(a, \lambda) \le 2 \quad \text{and} \quad 1 \le \dim \text{dom}\, D(b, \lambda) \le 2.$$

Furthermore, φ ∈ dom D(a, λ) or φ ∈ dom D(b, λ) if and only if Y(·, λ)φ is a solution of (7.2.4) that is square-integrable with respect to Δ at a or b, respectively.

Proof. It follows from Corollary 7.2.4 that

$$D(\beta,\lambda) - D(\alpha,\lambda) = 2\operatorname{Im}\lambda \int\_{\alpha}^{\beta} Y(s,\lambda)^\* \Delta(s) Y(s,\lambda) \, ds, \quad \lambda \in \mathbb{C},\tag{7.4.5}$$

holds for any compact interval [α, β] ⊂ ı. Hence, the matrix function D(·, λ) is nondecreasing for $\lambda \in \mathbb{C}^+$ and nonincreasing for $\lambda \in \mathbb{C}^-$. It follows from Corollary 5.2.14 that there exist self-adjoint relations D(a, λ) and D(b, λ) such that

$$\lim\_{t \to a} \left( D(t, \lambda) - \mu \right)^{-1} = \left( D(a, \lambda) - \mu \right)^{-1}, \qquad \mu \in \mathbb{C} \backslash \mathbb{R},$$

and

$$\lim\_{t \to b} \left( D(t, \lambda) - \mu \right)^{-1} = \left( D(b, \lambda) - \mu \right)^{-1}, \qquad \mu \in \mathbb{C} \backslash \mathbb{R}.$$

Next it will be shown that the dimension of the domains of the self-adjoint relations D(a, λ) and D(b, λ) is at least one. For this it is sufficient to prove that there exists at least one (finite) eigenvalue.

Note first that, by (7.4.4) and (7.2.12),

$$D(c\_0, \lambda) = Y(c\_0, \lambda)^\*(-iJ)Y(c\_0, \lambda) = -iJ,$$

and hence the eigenvalues of $D(c\_0, \lambda)$ are $\nu\_-(c\_0) = -1$ and $\nu\_+(c\_0) = 1$. As the function D(·, λ) is continuous on ı, the same holds true for its eigenvalues $\nu\_-(\cdot)$ and $\nu\_+(\cdot)$. Since the matrices D(t, λ) are self-adjoint and invertible for all t ∈ ı, it follows that $\nu\_-(t) < 0$ and $\nu\_+(t) > 0$ for all t ∈ ı. Recall that

$$\nu\_{-}(t) = \inf\_{|x|=1} \left( D(t,\lambda)x, x \right) \quad \text{and} \quad \nu\_{+}(t) = \sup\_{|x|=1} \left( D(t,\lambda)x, x \right),$$

and since $D(t\_1, \lambda) \le D(t\_2, \lambda)$ for $t\_1 \le t\_2$, it follows that

$$
\nu\_{-}(t\_1) \le \nu\_{-}(t\_2) \quad \text{and} \quad \nu\_{+}(t\_1) \le \nu\_{+}(t\_2), \quad t\_1 \le t\_2.
$$

Therefore, it is clear that the limits of ν−(t) and ν+(t) exist and that

$$\nu\_{-}(b) = \lim\_{t \to b} \nu\_{-}(t) \le 0 \quad \text{and} \quad 0 < \nu\_{+}(b) = \lim\_{t \to b} \nu\_{+}(t) \le \infty.$$

In order to see the connection of these limits with the self-adjoint relation D(b, λ), observe that for $\mu \in \mathbb{C} \setminus \mathbb{R}$

$$\frac{1}{\nu\_{-}(t) - \mu} \quad \text{and} \quad \frac{1}{\nu\_{+}(t) - \mu}$$

are the eigenvalues of the matrix $(D(t, \lambda) - \mu)^{-1}$. Therefore, again by continuity, one sees that

$$\frac{1}{\nu\_{-}(b) - \mu} \quad \text{and} \quad \frac{1}{\nu\_{+}(b) - \mu}$$

are the eigenvalues of the matrix $(D(b, \lambda) - \mu)^{-1}$. Hence, $\nu\_-(b)$ is a nonpositive eigenvalue of the self-adjoint relation D(b, λ), which implies dim(dom D(b, λ)) ≥ 1. More precisely, if $\nu\_+(b) < \infty$, then $\nu\_+(b)$ is a positive eigenvalue of D(b, λ), in which case dim(dom D(b, λ)) = 2, while if $\nu\_+(b) = \infty$, then D(b, λ) has a one-dimensional multivalued part and dim(dom D(b, λ)) = 1. Similar observations may be made for the self-adjoint relation D(a, λ). In particular, it follows that dim(dom D(a, λ)) ≥ 1.

Finally, it will be shown that φ ∈ dom D(b, λ) if and only if the solution Y(·, λ)φ of (7.2.4) is square-integrable with respect to Δ at b; the argument for the left endpoint a is the same. Suppose that $\lambda \in \mathbb{C}^+$, so that D(·, λ) is nondecreasing on ı. In this case it follows from Corollary 5.2.13 and Corollary 5.2.14 that

$$\text{dom}\,D(b,\lambda) = \left\{ \phi \in \mathbb{C}^2 : \lim\_{t \to b} \phi^\* D(t,\lambda)\phi < \infty \right\},$$

and hence (7.4.5) implies that φ ∈ dom D(b, λ) if and only if

$$\int\_{\alpha}^{b} \phi^\* Y(s, \lambda)^\* \Delta(s) Y(s, \lambda) \phi \, ds < \infty,$$

that is, the solution Y(·, λ)φ is square-integrable with respect to Δ at b. The case where λ ∈ C− is dealt with in a similar way. □

A combination of Theorems 7.4.1 and 7.4.2 leads to the following observation.

**Corollary 7.4.3.** If for some λ0 ∈ C \ R the equation Jf′ − Hf = λ0Δf has, up to scalar multiples, only one nontrivial solution which is square-integrable with respect to Δ at a or b, then for any λ ∈ C \ R the equation Jf′ − Hf = λΔf has, up to scalar multiples, precisely one nontrivial solution which is square-integrable with respect to Δ at a or b, respectively.

Proof. It is sufficient to consider the endpoint b. Assume that for some λ0 ∈ C \ R the equation Jf′ − Hf = λ0Δf has, up to scalar multiples, only one nontrivial solution that is square-integrable with respect to Δ at b, and suppose that for some λ ∈ C \ R with λ ≠ λ0 the equation Jf′ − Hf = λΔf does not have, up to scalar multiples, only one nontrivial solution which is square-integrable with respect to Δ at b. Since

$$1 \le \dim\left(\text{dom}\,D(b,\lambda)\right) \le 2$$

by Theorem 7.4.2, there exist two linearly independent solutions of Jf′ − Hf = λΔf which are square-integrable with respect to Δ at b. But then Theorem 7.4.1 implies that there also exist two linearly independent solutions of Jf′ − Hf = λ0Δf that are square-integrable with respect to Δ at b; a contradiction. □

Theorem 7.4.1 and Corollary 7.4.3 yield the limit-point and limit-circle classification for real canonical systems in the next definition and corollary. The terminology is inspired by the terminology for Sturm–Liouville equations in Section 6.1.

**Definition 7.4.4.** For a real canonical system the endpoint a or b of the interval ı is said to be in the limit-circle case if for some, and hence for all, λ ∈ C there exist two linearly independent solutions of Jf′ − Hf = λΔf that are square-integrable with respect to Δ at a or b, respectively. The endpoint a or b of the interval ı is said to be in the limit-point case if for some, and hence for all, λ ∈ C \ R there exists, up to scalar multiples, only one nontrivial solution of Jf′ − Hf = λΔf that is square-integrable with respect to Δ at a or b, respectively.
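As a numerical illustration of the limit-point case (a sketch with an illustrative equation, not an example from the text): the Sturm–Liouville equation −f″ = λf on (0, ∞), written as a canonical system with H = diag(0, 1) and Δ = diag(1, 0), has for λ = 2i exactly one exponential solution, up to scalar multiples, that is square-integrable at ∞:

```python
import numpy as np

# The endpoint b = infinity of -f'' = lam*f on (0, infinity) is in the
# limit-point case. In canonical-system form (J = [[0,-1],[1,0]],
# H = diag(0,1), Delta = diag(1,0)) the Delta-norm only sees the first
# component, so square-integrability of f_1 is what matters.
lam = 2j
mu = 1j * np.sqrt(lam)   # np.sqrt(2j) = 1 + 1j, hence mu = -1 + 1j

t = np.linspace(0.0, 40.0, 400001)
dt = t[1] - t[0]
u_dec = np.exp(mu * t)   # |u_dec(t)|^2 = e^{-2t}: square-integrable at infinity
u_gro = np.exp(-mu * t)  # |u_gro(t)|^2 = e^{+2t}: not square-integrable

norm_dec = np.sum(np.abs(u_dec) ** 2) * dt   # ~ int_0^inf e^{-2t} dt = 1/2
norm_gro = np.sum(np.abs(u_gro) ** 2) * dt

assert abs(norm_dec - 0.5) < 1e-2
assert norm_gro > 1e10
```

Every solution is a linear combination of these two, and any combination containing the growing exponential fails to be square-integrable, so exactly one direction survives: the limit-point case at b = ∞.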

Note that, by Theorem 7.4.2, at any endpoint of the interval ı there is at least one nontrivial solution that is square-integrable with respect to Δ and there are at most two linearly independent solutions that are square-integrable with respect to Δ. This leads to Weyl's alternative for canonical systems.

**Corollary 7.4.5.** For a real canonical system each of the endpoints of the interval is either in the limit-circle case or in the limit-point case.

For completeness, the special case of regular and quasiregular endpoints is also briefly discussed. The next corollary is an immediate consequence of Corollary 7.3.5.

**Corollary 7.4.6.** A regular or quasiregular endpoint of a real canonical system is in the limit-circle case.

A simple but useful characterization of the limit-point case is given in the following corollary. It is stated for the endpoint b, but clearly there is a similar statement for the endpoint a.

**Corollary 7.4.7.** Let the canonical system be real and assume that the endpoint a is regular or quasiregular. Then the following statements hold:

(i) If b is in the limit-point case, then for each λ ∈ R the equation Jf′ − Hf = λΔf has, up to scalar multiples, at most one nontrivial solution that is square-integrable with respect to Δ at b.

(ii) If for some λ ∈ R the equation Jf′ − Hf = λΔf has, up to scalar multiples, at most one nontrivial solution that is square-integrable with respect to Δ at b, then b is in the limit-point case.

Proof. (i) If there exists λ ∈ R for which the homogeneous equation has two linearly independent solutions that are square-integrable with respect to Δ at b, then by Theorem 7.4.1, for each λ ∈ C all nontrivial solutions are square-integrable with respect to Δ at b. Hence, b is in the limit-circle case; a contradiction.

(ii) If b is in the limit-circle case, then for all λ ∈ C, and hence for λ ∈ R, the homogeneous equation has two linearly independent solutions which are square-integrable with respect to Δ at b. This implies (ii). □

If, for instance, the endpoint b is regular or quasiregular, then any solution of (7.2.3) with g square-integrable with respect to Δ at b has a limit at b by Proposition 7.3.2 and b is in the limit-circle case by Corollary 7.4.6. However, if b is in the limit-circle case, then the solutions of (7.2.3) are square-integrable with respect to Δ at b, but they do not necessarily have a limit at b. It will be shown in this case that there exists a natural transformation which turns the system into one where b is quasiregular; cf. Lemma 7.2.5.

**Corollary 7.4.8.** Assume that a is regular and that b is in the limit-circle case. Let g ∈ L<sup>2</sup><sub>Δ</sub>(a, b) and let f(·, λ) be a solution of

$$Jf' - Hf = \lambda \Delta f + \Delta g.$$

Let U(·, λ0), λ0 ∈ R, be a matrix function as in (7.2.17). Then the limit

$$\tilde{f}(b) = \lim\_{t \to b} U(t, \lambda\_0)^{-1} f(t) \tag{7.4.6}$$

exists in C<sup>2</sup>. Moreover, for each γ ∈ C<sup>2</sup> there exists a unique solution f(·, λ) of (7.2.3) such that f̃(b) = γ and the corresponding function

$$\lambda \mapsto \lim\_{t \to b} U(t, \lambda\_0)^{-1} f(t, \lambda)$$

is entire.

Proof. Let g ∈ L<sup>2</sup><sub>Δ</sub>(a, b) and let f be a solution of (7.2.3). Since b is in the limit-circle case, there exists for λ0 ∈ R and c0 ∈ [a, b) a matrix function U(·, λ0) satisfying (7.2.17) that is square-integrable with respect to Δ at b. Thus, the function Δ̃ defined in (7.2.18) is integrable at b, which means that the endpoint b for the system in (7.2.20) is quasiregular. Since g is square-integrable with respect to Δ at b, the function g̃ in Lemma 7.2.5 is square-integrable with respect to Δ̃ at b. Therefore, the assertion is clear from Proposition 7.3.2 as f̃ is a solution of (7.2.20). □

Let the endpoint a be regular or quasiregular. Let g, k ∈ L<sup>2</sup><sub>Δ</sub>(ı) and let f, h be solutions of the inhomogeneous equations (7.2.6) such that f, h ∈ L<sup>2</sup><sub>Δ</sub>(ı). Then for a ≤ t < b one has

$$h(t)^\* Jf(t) - h(a)^\* Jf(a) = \int\_a^t \left( h(s)^\* \Delta(s) g(s) - k(s)^\* \Delta(s) f(s) \right) ds \tag{7.4.7}$$

by the Lagrange identity in Corollary 7.2.3. It follows from (7.4.7) that the limit

$$\lim\_{t \to b} h(t)^\* Jf(t)$$

exists. Of course, when b is regular or quasiregular, the individual limits lim<sub>t→b</sub> f(t) and lim<sub>t→b</sub> h(t) exist by Proposition 7.3.2; see also Corollary 7.3.4. In general the existence of these individual limits is not guaranteed. However, in the case where b is in the limit-circle case but not quasiregular, the next corollary suggests employing the limits in (7.4.6).
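For homogeneous solutions (g = k = 0 in (7.2.6)) the right-hand side of (7.4.7) vanishes, so t ↦ h(t)∗Jf(t) is constant and the limit trivially exists. A small numpy sketch makes this visible for an illustrative constant-coefficient system (H = 0, Δ = I; all concrete values are hypothetical, not from the text):

```python
import numpy as np

# Constant-coefficient toy system: H = 0, Delta = I, so Jf' = lam*f means
# f' = -lam*J*f (since J^{-1} = -J). With J^2 = -I one has the closed form
# exp(-z*J) = cos(z)*I - sin(z)*J, valid for complex z.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
lam = 0.7 + 0.3j

def propagator(z):
    # matrix exponential exp(-z*J)
    return np.cos(z) * np.eye(2) - np.sin(z) * J

f0 = np.array([1.0, 2.0], dtype=complex)         # f solves Jf' = lam*f
h0 = np.array([0.5 - 1.0j, 1.0], dtype=complex)  # h solves Jh' = conj(lam)*h

ts = np.linspace(0.0, 5.0, 6)
vals = [np.conj(propagator(np.conj(lam) * t) @ h0) @ (J @ (propagator(lam * t) @ f0))
        for t in ts]

# With g = k = 0 the right-hand side of (7.4.7) vanishes, so h(t)* J f(t)
# is constant in t.
assert np.allclose(vals, vals[0])
```

The pairing of f at λ with h at λ̄ is what makes the derivative of h(t)∗Jf(t) cancel, mirroring the Lagrange identity in Corollary 7.2.3.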

**Corollary 7.4.9.** Assume that the endpoint a is regular and that b is in the limit-circle case. Let g, k ∈ L<sup>2</sup><sub>Δ</sub>(ı) and let f, h be solutions of the inhomogeneous equations (7.2.6) such that f, h ∈ L<sup>2</sup><sub>Δ</sub>(ı). Then

$$\lim\_{t \to b} h(t)^\* Jf(t) = \widetilde{h}(b)^\* J\widetilde{f}(b),\tag{7.4.8}$$

where f̃(b) and h̃(b) are as in (7.4.6). Moreover,

$$
\tilde{h}(b)^\* J \tilde{f}(b) - h(a)^\* J f(a) = \int\_a^b \left( h(s)^\* \Delta(s) g(s) - k(s)^\* \Delta(s) f(s) \right) ds. \tag{7.4.9}
$$

Proof. It follows by taking limits in (7.4.7) that

$$\lim\_{t \to b} h(t)^\* Jf(t) - h(a)^\* Jf(a) = \int\_a^b \left( h(s)^\* \Delta(s) g(s) - k(s)^\* \Delta(s) f(s) \right) ds.$$

Now apply Corollary 7.2.6 and Corollary 7.4.8, and take the limit t → b; then (7.4.8) and (7.4.9) follow. □

## **7.5 Definite canonical systems**

The general class of canonical differential equations as in (7.2.3) will now be narrowed down by imposing a definiteness condition; see Definition 7.5.5. This condition will be assumed in the rest of this chapter. In this section various equivalent formulations of the definiteness condition will be presented. Moreover, it will be shown that the solution of a definite canonical system (7.2.3) can be cut off near an endpoint of the interval ı, in the sense that the solution is modified in such a way that it becomes trivial in a neighborhood of that endpoint.

It will be convenient to begin the discussion of definiteness of the canonical system (7.2.3) with the notion of definiteness when the system is restricted to an arbitrary subinterval j ⊂ ı.

**Definition 7.5.1.** Let j ⊂ ı be a nonempty interval. The canonical system (7.2.3) is said to be definite on j if for each solution f of Jf′ − Hf = 0 on j one has

$$
\Delta(t)f(t) = 0, \ t \in \mathcal{J} \quad \Rightarrow \quad f(t) = 0, \ t \in \mathcal{J}.
$$

Observe that if a solution f of the canonical system (7.2.3) vanishes on a nonempty subinterval j ⊂ ı, then f(t) = 0 for t ∈ ı; cf. Theorem 7.2.1. Hence, it is clear that if the canonical system (7.2.3) is definite on j, then it is also definite on every interval j′ with the property that j ⊂ j′ ⊂ ı. Also observe that with the subinterval j ⊂ ı and a continuous function f one has

$$
\Delta(t)f(t) = 0, \ t \in \mathcal{J} \quad \Leftrightarrow \quad \int\_{\mathcal{J}} f(s)^{\*} \Delta(s) f(s) \, ds = 0. \tag{7.5.1}
$$

Clearly, if Δ(t) has full rank for almost all t ∈ j, then the canonical system is automatically definite on j.

**Lemma 7.5.2.** Let j ⊂ ı be a nonempty interval. The canonical system (7.2.3) is definite on j if and only if for all λ ∈ C and for each solution f of Jf′ − Hf = λΔf on j one has

$$
\Delta(t)f(t) = 0, \ t \in \mathcal{J} \quad \Rightarrow \quad f(t) = 0, \ t \in \mathcal{J}.
$$

Proof. Assume that the canonical system is definite on j. Choose λ ∈ C and let f be a solution of Jf′ − Hf = λΔf on j with Δ(t)f(t) = 0 for almost all t ∈ j. Thus, f is a solution of Jf′ − Hf = 0 with Δ(t)f(t) = 0 for almost all t ∈ j. By assumption this implies that f(t) = 0 for t ∈ j. The converse statement is trivial. □

The following result is an alternative useful version of Lemma 7.5.2 in terms of a fundamental matrix Y (·, λ).

**Corollary 7.5.3.** Let Y(·, λ), λ ∈ C, be a fundamental matrix for (7.2.3) and let I ⊂ ı be a compact interval. Then the system (7.2.3) is definite on I if and only if the 2 × 2 matrix

$$\int\_{I} Y(s,\lambda)^{\*} \Delta(s) Y(s,\lambda) \, ds \tag{7.5.2}$$

is invertible for some, and hence for all, λ ∈ C.

Proof. Assume that (7.2.3) is definite on I. If the (nonnegative) matrix in (7.5.2) is not invertible, then there exists a nontrivial γ ∈ C<sup>2</sup> for which

$$\gamma^\* \left( \int\_I Y(s,\lambda)^\* \Delta(s) Y(s,\lambda) \, ds \right) \gamma = 0,\tag{7.5.3}$$

or alternatively Δ(t)Y(t, λ)γ = 0 for t ∈ I; cf. (7.5.1). Since Y(·, λ)γ is a solution of Jf′ − Hf = λΔf, it follows from the definiteness that Y(t, λ)γ = 0 for t ∈ I, which implies γ = 0. This contradiction shows that the matrix in (7.5.2) is invertible.

Conversely, assume that the (nonnegative) matrix in (7.5.2) is invertible. In order to show that (7.2.3) is definite, let

$$Jf'(t) - H(t)f(t) = \lambda \Delta(t)f(t), \quad \Delta(t)f(t) = 0, \quad t \in I.$$

Since Y(·, λ) is a fundamental matrix of Jf′ − Hf = λΔf, every solution of this equation can be written in the form f = Y(·, λ)γ with a unique γ ∈ C<sup>2</sup>. The condition Δ(t)f(t) = 0, t ∈ I, implies that (7.5.3) holds. Therefore, γ = 0 and thus the system (7.2.3) is definite. □
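Corollary 7.5.3 reduces definiteness on a compact interval to a finite-dimensional linear-algebra check. The following numpy sketch (an illustrative system, not an example from the text) carries this out for H = diag(0, 1) with the singular weight Δ = diag(1, 0): the Gram matrix (7.5.2) is invertible, so the system is definite on [0, 1] even though Δ(t) is nowhere invertible:

```python
import numpy as np

# Canonical system Jf' - Hf = lam*Delta*f with H = diag(0, 1) and the
# singular weight Delta = diag(1, 0) (the equation -f'' = lam*f in system
# form). At lam = 0 a fundamental matrix with Y(0, 0) = I is
# Y(t, 0) = [[1, t], [0, 1]].
def Y(t):
    return np.array([[1.0, t], [0.0, 1.0]])

Delta = np.diag([1.0, 0.0])

# Gram matrix (7.5.2) over I = [0, 1], computed with a simple quadrature.
ts = np.linspace(0.0, 1.0, 10001)
dt = ts[1] - ts[0]
M = sum(Y(t).T @ Delta @ Y(t) for t in ts) * dt

# Closed form: int_0^1 [[1, s], [s, s^2]] ds = [[1, 1/2], [1/2, 1/3]],
# with det = 1/12 != 0, so the system is definite on [0, 1] by
# Corollary 7.5.3 even though Delta(t) is singular for every t.
assert abs(np.linalg.det(M) - 1.0 / 12.0) < 1e-3
```

This matches the remark after (7.5.1): full rank of Δ is sufficient for definiteness but by no means necessary.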

The next proposition shows that there is no difference between global definiteness and local definiteness.

**Proposition 7.5.4.** The canonical system (7.2.3) is definite on ı if and only if there exists a compact interval I ⊂ ı such that the canonical system (7.2.3) is definite on the interval I.

Proof. If the canonical system (7.2.3) is definite on the interval I, then it is clearly definite on the larger interval ı.

To see the converse statement, let the canonical system (7.2.3) be definite on the interval ı; in other words, assume that for each solution f of Jf′ − Hf = 0 on ı one has

$$
\Delta(t)f(t) = 0, \quad t \in \iota \quad \Rightarrow \quad f(t) = 0, \quad t \in \iota.
$$

Introduce for each compact subinterval K of ı the subset d(K) of C<sup>2</sup> by

$$d(K) = \left\{ \phi \in \mathbb{C}^2 \, : \, |\phi| = 1, \int\_K \phi^\* Y(s, 0)^\* \Delta(s) Y(s, 0) \phi \, ds = 0 \right\}.$$

Clearly, d(K) is compact and K ⊂ K′ implies d(K′) ⊂ d(K). Now choose an increasing sequence of compact intervals (Kn) such that their union equals the interval ı. Then

$$\bigcap\_{n \in \mathbb{N}} d(K\_n) = \emptyset. \tag{7.5.4}$$

Indeed, assume that there exists an element φ ∈ C<sup>2</sup> with |φ| = 1 such that

$$\int\_{K\_n} \phi^\* Y(s,0)^\* \Delta(s) Y(s,0) \phi \, ds = 0$$

for every n ∈ N. Then, by monotone convergence,

$$\int\_{\iota} \phi^\* Y(s,0)^\* \Delta(s) Y(s,0) \phi \, ds = 0.$$

As the canonical system (7.2.3) is definite, this implies by (7.5.1) that Y(·, 0)φ = 0, which leads to φ = 0; a contradiction. Therefore, the identity (7.5.4) is valid. Since each of the sets d(Kn) in (7.5.4) is compact, it follows that there exists a compact interval Km such that d(Km) = ∅. Hence, I = Km satisfies the requirements. To see this, let Jf′ − Hf = 0 on Km and assume that Δ(t)f(t) = 0, t ∈ Km, or, equivalently, ∫<sub>Km</sub> f(s)∗Δ(s)f(s) ds = 0; cf. (7.5.1). Since d(Km) = ∅ one concludes that f = 0. □

In the rest of the text one often speaks of definite systems in the following sense.

**Definition 7.5.5.** The canonical system (7.2.3) is said to be definite if it is definite on ı.

The next result is about smoothly cutting off the solution of a definite canonical system (7.2.3) near an endpoint of the interval ı, i.e., modifying the solution so that it becomes trivial in a neighborhood of that endpoint. The following proposition and corollary will be used in Section 7.6.

**Proposition 7.5.6.** Let the canonical system (7.2.3) be definite and choose a compact interval [α, β] ⊂ ı such that the system is definite on [α, β]. Let g ∈ L<sup>2</sup><sub>Δ,loc</sub>(ı) and let f ∈ AC(ı) be a solution of the inhomogeneous equation (7.2.3) for some λ ∈ C. Then there exist functions fa ∈ AC(ı) and ga ∈ L<sup>2</sup><sub>Δ,loc</sub>(ı) satisfying

$$Jf\_a'(t) - H(t)f\_a(t) = \lambda \Delta(t)f\_a(t) + \Delta(t)g\_a(t)$$

such that

$$f\_a(t) = \begin{cases} f(t), & t \in (a, \alpha], \\ 0, & t \in [\beta, b), \end{cases} \quad \text{and} \quad g\_a(t) = \begin{cases} g(t), & t \in (a, \alpha], \\ 0, & t \in [\beta, b). \end{cases}$$

Similarly, there exist functions fb ∈ AC(ı) and gb ∈ L<sup>2</sup><sub>Δ,loc</sub>(ı) satisfying

$$Jf\_b'(t) - H(t)f\_b(t) = \lambda \Delta(t)f\_b(t) + \Delta(t)g\_b(t)$$

such that

$$f\_b(t) = \begin{cases} 0, & t \in (a, \alpha], \\ f(t), & t \in [\beta, b), \end{cases} \quad \text{and} \quad g\_b(t) = \begin{cases} 0, & t \in (a, \alpha], \\ g(t), & t \in [\beta, b). \end{cases}$$

Proof. Let the functions f and g be as indicated. The result will be proved for the functions fb and gb; the proof for the functions fa and ga is similar.

Let [α, β] ⊆ ı be a compact interval on which the canonical system (7.2.3) is definite; cf. Proposition 7.5.4. Let k ∈ L<sup>2</sup><sub>Δ</sub>(α, β) and fix a fundamental matrix Y(·, λ) by the initial condition Y(α, λ) = I. According to (7.2.15), the function defined by

$$h(t) = -Y(t, \lambda) \int\_{\alpha}^{t} JY(s, \overline{\lambda})^\* \Delta(s) k(s) \, ds \tag{7.5.5}$$

satisfies the inhomogeneous equation

$$Jh'(t) - H(t)h(t) = \lambda \Delta(t)h(t) + \Delta(t)k(t), \quad \alpha < t < \beta,$$

and at the endpoints it takes the values

$$h(\alpha) = 0 \quad \text{and} \quad h(\beta) = -Y(\beta, \lambda) \int\_{\alpha}^{\beta} JY(s, \overline{\lambda})^\* \Delta(s) k(s) \, ds.$$

It will be shown that there exists a function k ∈ L<sup>2</sup><sub>Δ</sub>(α, β) such that h(β) = f(β).

In order to verify this, observe that Y (β,λ) is invertible and that the integral operator

$$\ell \mapsto \int\_{\alpha}^{\beta} JY(s,\overline{\lambda})^\* \Delta(s) \ell(s) \, ds$$

taking L<sup>2</sup><sub>Δ</sub>(α, β) into C<sup>2</sup> is surjective. To see this, assume that γ ∈ C<sup>2</sup> is orthogonal to the range of this integral operator, that is,

$$0 = \gamma^\* \int\_{\alpha}^{\beta} JY(s, \overline{\lambda})^\* \Delta(s) \ell(s) \, ds = \int\_{\alpha}^{\beta} (Y(s, \overline{\lambda}) J^\* \gamma)^\* \Delta(s) \ell(s) \, ds$$

for all ℓ ∈ L<sup>2</sup><sub>Δ</sub>(α, β). With ℓ(s) = Y(s, λ̄)J∗γ it then follows from (7.5.1) and Lemma 7.5.2 that ℓ(s) = 0 for s ∈ (α, β), which implies that γ = 0. Thus, the integral operator is surjective.

Now choose k ∈ L<sup>2</sup><sub>Δ</sub>(α, β) as above, so that h defined by (7.5.5) satisfies h(α) = 0 and h(β) = f(β). Hence, the functions fb and gb defined by

$$f\_b(t) = \begin{cases} 0, & t \in (a, \alpha], \\ h(t), & t \in (\alpha, \beta), \\ f(t), & t \in [\beta, b), \end{cases} \quad \text{and} \quad g\_b(t) = \begin{cases} 0, & t \in (a, \alpha], \\ k(t), & t \in (\alpha, \beta), \\ g(t), & t \in [\beta, b), \end{cases}$$

satisfy the appropriate inhomogeneous canonical equations on (a, α), (α, β), and (β, b). Since fb(α) = h(α) and fb(β) = h(β) it follows that fb ∈ AC(ı). □

In particular, if f is a solution of the homogeneous system (7.2.4), then f can be localized as indicated above. The following restatement of this fact in terms of matrix functions (groupings of column vector functions) is useful. Note that the modification of the solutions of the homogeneous equation involves a solution of the inhomogeneous equation.

**Corollary 7.5.7.** Let the canonical system (7.2.3) be definite and choose a compact interval [α, β] ⊂ ı such that the system is definite on [α, β]. Let Y(·, λ) be a fundamental matrix of (7.2.4). Then there exist a 2 × 2 matrix function Ya(·, λ) ∈ AC(ı) and a 2 × 2 matrix function Za(·, λ) whose columns belong to L<sup>2</sup><sub>Δ</sub>(ı), satisfying

$$JY\_a'(t, \lambda) - H(t)Y\_a(t, \lambda) = \lambda \Delta(t)Y\_a(t, \lambda) + \Delta(t)Z\_a(t, \lambda)$$

such that

$$Y\_a(t, \lambda) = \begin{cases} Y(t, \lambda), & t \in (a, \alpha], \\ 0, & t \in [\beta, b), \end{cases} \quad \text{and} \quad Z\_a(t, \lambda) = \begin{cases} 0, & t \in (a, \alpha], \\ 0, & t \in [\beta, b). \end{cases}$$

Similarly, there exist a 2 × 2 matrix function Yb(·, λ) ∈ AC(ı) and a 2 × 2 matrix function Zb(·, λ) whose columns belong to L<sup>2</sup><sub>Δ</sub>(ı), satisfying

$$JY\_b'(t, \lambda) - H(t)Y\_b(t, \lambda) = \lambda \Delta(t)Y\_b(t, \lambda) + \Delta(t)Z\_b(t, \lambda)$$

such that

$$Y\_b(t, \lambda) = \begin{cases} 0, & t \in (a, \alpha], \\ Y(t, \lambda), & t \in [\beta, b), \end{cases} \quad \text{and} \quad Z\_b(t, \lambda) = \begin{cases} 0, & t \in (a, \alpha], \\ 0, & t \in [\beta, b). \end{cases}$$

With φ ∈ C<sup>2</sup> observe that the function Ya(·, λ)φ belongs to L<sup>2</sup><sub>Δ</sub>(ı) if and only if Y(·, λ)φ is square-integrable with respect to Δ at a, and, likewise, that the function Yb(·, λ)φ belongs to L<sup>2</sup><sub>Δ</sub>(ı) if and only if Y(·, λ)φ is square-integrable with respect to Δ at b.

It is useful to have a special notation for the elements that modify the pairs {Y(·, λ), λY(·, λ)} in Corollary 7.5.7. Define the matrix functions 𝒴a(·, λ) and 𝒴b(·, λ) by

$$\begin{aligned} \mathcal{Y}\_a(\cdot,\lambda) &:= \left\{ Y\_a(\cdot,\lambda), \lambda Y\_a(\cdot,\lambda) + Z\_a(\cdot,\lambda) \right\}, \\ \mathcal{Y}\_b(\cdot,\lambda) &:= \left\{ Y\_b(\cdot,\lambda), \lambda Y\_b(\cdot,\lambda) + Z\_b(\cdot,\lambda) \right\}, \end{aligned} \tag{7.5.6}$$

that is, for φ ∈ C<sup>2</sup> one has

$$\begin{aligned} \mathcal{Y}\_a(\cdot,\lambda)\phi &= \left\{ Y\_a(\cdot,\lambda)\phi, \lambda Y\_a(\cdot,\lambda)\phi + Z\_a(\cdot,\lambda)\phi \right\}, \\ \mathcal{Y}\_b(\cdot,\lambda)\phi &= \left\{ Y\_b(\cdot,\lambda)\phi, \lambda Y\_b(\cdot,\lambda)\phi + Z\_b(\cdot,\lambda)\phi \right\}. \end{aligned}$$

Note that 𝒴a(·, λ) and 𝒴b(·, λ) satisfy

$$\begin{aligned} \mathcal{Y}\_a(t,\lambda) &= \begin{cases} \{Y(t,\lambda), \lambda Y(t,\lambda)\}, & a < t \le \alpha, \\ \{0, 0\}, & \beta \le t < b, \end{cases} \\ \mathcal{Y}\_b(t,\lambda) &= \begin{cases} \{0, 0\}, & a < t \le \alpha, \\ \{Y(t,\lambda), \lambda Y(t,\lambda)\}, & \beta \le t < b. \end{cases} \end{aligned} \tag{7.5.7}$$

It is clear from the construction that the columns of 𝒴a(·, λ) or 𝒴b(·, λ) are square-integrable on (a, b) with respect to Δ if and only if the corresponding columns of Y(·, λ) have this property at a or b, respectively.

## **7.6 Maximal and minimal relations for canonical systems**

In this and later sections it will be assumed that the canonical system (7.2.3) is real as in Definition 7.2.7 and definite as in Definition 7.5.5: such systems will be called real definite canonical systems. In this context the central Hilbert space will be L<sup>2</sup><sub>Δ</sub>(ı), in which the maximal and minimal relations associated with the real definite canonical system (7.2.3) will be defined. In principle, both these relations may be multivalued. The results from Section 7.4 and Section 7.5 make it possible to consider the limit-circle case and the limit-point case from the point of view of the maximal and minimal relations.

The real definite canonical system (7.2.3) induces the maximal relation Tmax in L<sup>2</sup><sub>Δ</sub>(ı) defined by

$$T\_{\max} = \left\{ \{f, g\} \in L^2\_{\Delta}(\iota) \times L^2\_{\Delta}(\iota) : Jf' - Hf = \Delta g\right\}.$$

Since the elements of L<sup>2</sup><sub>Δ</sub>(ı) are equivalence classes, the definition of Tmax needs the following explanation: an element {f, g} ∈ L<sup>2</sup><sub>Δ</sub>(ı) × L<sup>2</sup><sub>Δ</sub>(ı) belongs to Tmax if and only if the equivalence class f contains an absolutely continuous representative f̃ such that the inhomogeneous equation Jf̃′(t) − H(t)f̃(t) = Δ(t)g̃(t) is satisfied for almost every t ∈ ı. Here g̃ is any representative of g ∈ L<sup>2</sup><sub>Δ</sub>(ı); observe that the function Δ(t)g̃(t) is independent of the representative. The above argument also shows that the relation Tmax is linear.

Since the canonical system (7.2.3) is assumed to be definite, the absolutely continuous representative is unique.

**Lemma 7.6.1.** If {f, g} ∈ Tmax, then the equivalence class f has a unique absolutely continuous representative.

Proof. Let {f, g} ∈ Tmax and let f̃1 and f̃2 be absolutely continuous representatives of f. Then J(f̃1 − f̃2)′ − H(f̃1 − f̃2) = 0 holds and

$$
\Delta(t)(\tilde{f}\_1 - \tilde{f}\_2)(t) = 0, \quad t \in \iota.
$$

Therefore, by Definition 7.5.5, it follows that f̃1(t) = f̃2(t) for all t ∈ ı. □

It will be shown that Tmax is the adjoint of a symmetric relation whose defect numbers are equal and at most (2, 2). Let T0 be the preminimal relation, i.e., the restriction of the maximal relation Tmax to the elements where the first component has compact support in ı:

$$T\_0 = \left\{ \{f, g\} \in T\_{\text{max}} \, : \, f \text{ has compact support} \right\}.$$

More precisely, an element {f, g} ∈ L<sup>2</sup><sub>Δ</sub>(ı) × L<sup>2</sup><sub>Δ</sub>(ı) belongs to T0 if and only if the equivalence class f contains an absolutely continuous representative f̃ with compact support such that the inhomogeneous equation Jf̃′(t) − H(t)f̃(t) = Δ(t)g̃(t) is satisfied for almost every t ∈ ı. Here g̃ is any representative of g ∈ L<sup>2</sup><sub>Δ</sub>(ı). The minimal relation Tmin is defined as the closure Tmin = T̄0 of T0.

**Theorem 7.6.2.** The closure Tmin = T̄0 of T0 is a closed symmetric relation in L<sup>2</sup><sub>Δ</sub>(ı) and it satisfies

$$T\_{\min} \subset (T\_{\min})^\* = T\_{\max},$$

and, consequently, Tmin = (Tmax)∗.

Proof. Step 1. It will be shown that

$$T\_{\text{max}} \subset (T\_0)^\*.\tag{7.6.1}$$

For this purpose, let {f, g} ∈ Tmax, {h, k} ∈ T0, and choose an interval [α, β] ⊂ ı containing the support of h (and hence the support of Δk). Then

$$\begin{aligned} (g,h)\_\Delta - (f,k)\_\Delta &= \int\_\alpha^\beta h(s)^\* \Delta(s) g(s) \, ds - \int\_\alpha^\beta k(s)^\* \Delta(s) f(s) \, ds \\ &= h(\beta)^\* Jf(\beta) - h(\alpha)^\* Jf(\alpha) \\ &= 0 \end{aligned}$$

by Corollary 7.2.3. Here (·, ·)<sub>Δ</sub> denotes the scalar product in L<sup>2</sup><sub>Δ</sub>(ı), f and h are the uniquely defined absolutely continuous representatives, while g and k are arbitrary representatives. Observe that the integral does not depend on the particular choice of g and k. This shows {f, g} ∈ (T0)∗ and hence (7.6.1) follows.

Step 2. It will be shown that

$$(T\_0)^\* \subset T\_{\text{max}}\,. \tag{7.6.2}$$

For this, let {f, g} ∈ (T0)∗. By Theorem 7.2.1, there exists a nontrivial absolutely continuous function u on ı such that Ju′ − Hu = Δg. The aim is to show that for any representative f the difference f − u is absolutely continuous modulo an element whose L<sup>2</sup><sub>Δ</sub>(a, b)-norm is zero. Recall that the system is assumed to be definite, and hence there exists a compact interval [α0, β0] on which it is definite; cf. Proposition 7.5.4. Choose an interval [α1, β1] ⊂ ı which contains [α0, β0]; then the system is also definite on [α1, β1].

It is convenient to introduce the subspace

$$\mathfrak{M}\_1 := \left\{ k \in \mathcal{L}^2\_{\Delta}(\alpha\_1, \beta\_1) : \begin{aligned} &Jh' - Hh = \Delta k \text{ for some } h \in AC[\alpha\_1, \beta\_1] \\ &\text{such that } h(\alpha\_1) = h(\beta\_1) = 0 \end{aligned} \right\}.$$

Let k ∈ 𝔐1 and let h ∈ AC[α1, β1] be a solution of Jh′ − Hh = Δk for which h(α1) = h(β1) = 0. It follows from (7.2.15) that

$$h(t) = Y(t,0)J^{-1} \int\_{\alpha\_1}^{t} Y(s,0)^\* \Delta(s) k(s) \, ds, \quad t \in [\alpha\_1, \beta\_1],\tag{7.6.3}$$

where the fundamental matrix Y (·, λ) is fixed by Y (α1, λ) = I. Note that the condition h(β1) = 0 implies

$$\int\_{\alpha\_1}^{\beta\_1} Y(s,0)^\* \Delta(s) k(s) \, ds = 0. \tag{7.6.4}$$

Conversely, if k ∈ L<sup>2</sup><sub>Δ</sub>(α1, β1) satisfies (7.6.4), then k ∈ 𝔐1, since h ∈ AC[α1, β1] in (7.6.3) satisfies Jh′ − Hh = Δk and h(α1) = h(β1) = 0. In other words, one has

$$\mathfrak{M}\_1 = \left\{ k \in \mathcal{L}^2\_{\Delta}(\alpha\_1, \beta\_1) : \int\_{\alpha\_1}^{\beta\_1} Y(s, 0)^\* \Delta(s) k(s) \, ds = 0 \right\}.$$

Now let k ∈ 𝔐1 and let h be defined by (7.6.3). Then the pair of functions {h, k} can be trivially extended to all of ı and the extended pair, which will also be denoted by {h, k}, belongs to T0. As {f, g} ∈ (T0)∗, one has (h, g)<sub>Δ</sub> = (k, f)<sub>Δ</sub>, and since the supports of h and k are inside [α1, β1] it follows that

$$\int\_{\alpha\_1}^{\beta\_1} g(s)^\* \Delta(s) h(s) \, ds = \int\_{\alpha\_1}^{\beta\_1} f(s)^\* \Delta(s) k(s) \, ds. \tag{7.6.5}$$

Consider the pair {u, g} on [α1, β1]. Note that on this interval u is absolutely continuous and g is square-integrable with respect to Δ. It follows from the Lagrange identity in Corollary 7.2.3 applied to the pairs {h, k} and {u, g}, and h(α1) = h(β1) = 0 that

$$\int\_{\alpha\_1}^{\beta\_1} g(s)^\* \Delta(s) h(s) \, ds = \int\_{\alpha\_1}^{\beta\_1} u(s)^\* \Delta(s) k(s) \, ds. \tag{7.6.6}$$

Combining (7.6.5) and (7.6.6), one obtains that

$$\int\_{\alpha\_1}^{\beta\_1} (f(s) - u(s))^\* \Delta(s) k(s) \, ds = 0$$

for all k ∈ 𝔐1. In other words, the restriction of f − u to [α1, β1] is orthogonal to 𝔐1 in the semidefinite Hilbert space L<sup>2</sup><sub>Δ</sub>(α1, β1). Furthermore, by (7.6.4) one has that Y(·, 0)γ is orthogonal to 𝔐1 in L<sup>2</sup><sub>Δ</sub>(α1, β1) for all γ ∈ C<sup>2</sup>, and since the same is true for f − u it follows that f − u − Y(·, 0)γ is orthogonal to 𝔐1 in L<sup>2</sup><sub>Δ</sub>(α1, β1) for all γ ∈ C<sup>2</sup>.

Next it will be shown that for some γ1 ∈ C<sup>2</sup> the function f − u − Y(·, 0)γ1 belongs to 𝔐1. In fact, first of all it is clear from (7.2.15) that for any γ ∈ C<sup>2</sup>

$$h(t) = Y(t,0)J^{-1} \int\_{\alpha\_1}^t Y(s,0)^\* \Delta(s) \left( f(s) - u(s) - Y(s,0)\gamma \right) ds$$

satisfies Jh′ − Hh = Δ(f − u − Y(·, 0)γ) and h(α1) = 0. To satisfy the boundary condition h(β1) = 0, choose γ = γ1 ∈ C<sup>2</sup> such that

$$\int\_{\alpha\_1}^{\beta\_1} Y(s,0)^\* \Delta(s) (f(s) - u(s)) \, ds = \int\_{\alpha\_1}^{\beta\_1} Y(s,0)^\* \Delta(s) Y(s,0) \gamma\_1 \, ds;$$

this is possible since the system is definite on [α1, β1] and hence the matrix

$$\int\_{\alpha\_1}^{\beta\_1} Y(s,0)^\* \Delta(s) Y(s,0) \, ds$$

is invertible; cf. Corollary 7.5.3. Therefore, f − u − Y(·, 0)γ1 ∈ 𝔐1. Since the element f − u − Y(·, 0)γ1 is orthogonal to 𝔐1 in L<sup>2</sup><sub>Δ</sub>(α1, β1), this yields

$$\int\_{\alpha\_1}^{\beta\_1} \left( f(s) - u(s) - Y(s, 0)\gamma\_1 \right)^\* \Delta(s) \left( f(s) - u(s) - Y(s, 0)\gamma\_1 \right) ds = 0,$$

and hence there exists a function ω<sup>1</sup> on [α1, β1] such that

$$f(s) = u(s) + Y(s, 0)\gamma\_1 + \omega\_1(s), \quad \Delta(s)\omega\_1(s) = 0, \quad s \in [\alpha\_1, \beta\_1].$$

Likewise, on any interval [α2, β2] extending [α1, β1] the same argument shows that there exist γ2 ∈ C<sup>2</sup> and a function ω2 such that

$$f(s) = u(s) + \tilde{Y}(s, 0)\gamma\_2 + \omega\_2(s), \quad \Delta(s)\omega\_2(s) = 0, \quad s \in [\alpha\_2, \beta\_2];$$

here the fundamental matrix $\tilde{Y}(\cdot, \lambda)$ is fixed by $\tilde{Y}(\alpha\_2, \lambda) = I$. Hence, on the smaller interval one obtains for $s \in [\alpha\_1, \beta\_1]$

$$
\omega\_1(s) - \omega\_2(s) = \tilde{Y}(s,0)\gamma\_2 - Y(s,0)\gamma\_1, \quad \Delta(s)(\omega\_1(s) - \omega\_2(s)) = 0.
$$

Since the system is definite on the interval $[\alpha\_1, \beta\_1]$, this shows that $\omega\_1(s) = \omega\_2(s)$ and $Y(s, 0)\gamma\_1 = \tilde{Y}(s, 0)\gamma\_2$ for $s \in [\alpha\_1, \beta\_1]$, and hence $Y(s, 0)\gamma\_1 = \tilde{Y}(s, 0)\gamma\_2$ for $s \in \imath$. One concludes that there exists a function $\omega$ such that

$$f(s) = u(s) + Y(s,0)\gamma\_1 + \omega(s), \quad \Delta(s)\omega(s) = 0, \quad s \in \iota.$$

Thus, the functions $f$ and $u + Y(\cdot, 0)\gamma\_1$ belong to the same equivalence class in $L^2\_\Delta(\imath)$. Since $J(u + Y(\cdot, 0)\gamma\_1)' - H(u + Y(\cdot, 0)\gamma\_1) = \Delta g$, it follows that {f,g} ∈ Tmax and $u + Y(\cdot, 0)\gamma\_1$ is the unique absolutely continuous representative of $f$. This implies (7.6.2).

Step 3. It follows from (7.6.1) and (7.6.2) that $T\_{\max} = (T\_0)^\*$ and, in particular, this implies that $T\_{\max}$ is closed. Hence, the fact that $T\_0 \subset T\_{\max}$ and the definition $T\_{\min} = \overline{T}\_0$ imply that

$$T\_{\min} = \overline{T}\_0 \subset T\_{\max} = (T\_0)^\* = (T\_{\min})^\*.$$

Thus, $T\_{\min}$ is a (closed) symmetric relation and $T\_{\min} = (T\_{\max})^\*$. $\square$

At this stage note that $T\_{\min}$ is a closed symmetric relation which need not be densely defined in $L^2\_\Delta(\imath)$. Consider the orthogonal decomposition

$$L^2\_{\Delta}(\iota) = (\operatorname{mul} T\_{\min})^\perp \oplus \operatorname{mul} T\_{\min} = \overline{\operatorname{dom}} T\_{\max} \oplus \operatorname{mul} T\_{\min} \tag{7.6.7}$$

and recall from Theorem 1.4.11 that Tmin admits the corresponding orthogonal sum decomposition

$$T\_{\min} = (T\_{\min})\_{\rm op} \,\widehat{\oplus}\, \big( \{0\} \times \operatorname{mul} T\_{\min} \big). \tag{7.6.8}$$

The operator part $(T\_{\min})\_{\rm op}$ is not necessarily densely defined in $\overline{\operatorname{dom}}\, T\_{\max}$, and $\{0\} \times \operatorname{mul} T\_{\min}$ is the purely multivalued self-adjoint relation in $\operatorname{mul} T\_{\min}$.

Since by Theorem 7.6.2 the relation $T\_{\min}$ is closed and symmetric, while $(T\_{\min})^\* = T\_{\max}$, it follows from the von Neumann decomposition, as given in Theorem 1.7.11, that the relation $T\_{\max}$ has the componentwise sum decomposition

$$T\_{\text{max}} = T\_{\text{min}} \stackrel{\frown}{+} \widehat{\mathfrak{N}}\_{\lambda}(T\_{\text{max}}) \stackrel{\frown}{+} \widehat{\mathfrak{N}}\_{\mu}(T\_{\text{max}}), \quad \lambda \in \mathbb{C}^{+}, \,\mu \in \mathbb{C}^{-},$$

where the sums are direct. Now assume that $f \in \mathfrak{N}\_\zeta(T\_{\max})$ with $\zeta \in \mathbb{C} \setminus \mathbb{R}$. Then $\{f, \zeta f\} \in T\_{\max}$, which is equivalent to $f \in L^2\_\Delta(\imath)$ having an absolutely continuous representative $f$ such that $Jf' - Hf = \zeta \Delta f$. Since the system consists of $2 \times 2$ matrix functions, there are precisely two linearly independent solutions of this homogeneous equation and at most two linearly independent solutions that are square-integrable with respect to $\Delta$. Furthermore, since the canonical system is assumed to be real, the number of solutions at $\zeta \in \mathbb{C}^+$ that are square-integrable with respect to $\Delta$ coincides with the number of solutions at $\zeta \in \mathbb{C}^-$ that are square-integrable with respect to $\Delta$ by Corollary 7.2.9. Taking into account Corollary 7.4.5 and Corollary 7.4.6 one then obtains the following statement. The case where both endpoints of the interval ı are in the limit-point case will be dealt with in Corollary 7.6.9.

**Corollary 7.6.3.** Let Tmin be the minimal symmetric relation associated with the real definite canonical system (7.2.3) in $L^2\_\Delta(\imath)$. Then the following statements hold:

(i) If both endpoints of ı are in the limit-circle case, then the defect numbers of Tmin are (2, 2).

(ii) If one of the endpoints of ı is in the limit-circle case and the other endpoint is in the limit-point case, then the defect numbers of Tmin are (1, 1).

Recall that elements {f,g} ∈ Tmax satisfy the equation $Jf' - Hf = \Delta g$ and that the entries also satisfy the integrability condition $f, g \in L^2\_\Delta(\imath)$. These two ingredients make it possible to extend the usual Lagrange identity in Corollary 7.2.3 on a compact subinterval to all of ı. This new Lagrange identity for the elements in Tmax will play an important role in the rest of this chapter.

**Lemma 7.6.4.** Let Tmax be the maximal relation associated with the real definite canonical system (7.2.3) in $L^2\_\Delta(\imath)$. Then for all {f,g}, {h, k} ∈ Tmax one has

$$(g,h)\_{\Delta} - (f,k)\_{\Delta} = \lim\_{t \to b} h(t)^{\*} Jf(t) - \lim\_{t \to a} h(t)^{\*} Jf(t),\tag{7.6.9}$$

where f(t) and h(t) denote the values of the unique absolutely continuous representatives of f and h, respectively.

Proof. First observe that for all elements {f,g}, {h, k} ∈ Tmax and every compact subinterval [α, β] ⊂ ı one has

$$\int\_{\alpha}^{\beta} \left( h(s)^{\*} \Delta(s) g(s) - k(s)^{\*} \Delta(s) f(s) \right) ds = h(\beta)^{\*} Jf(\beta) - h(\alpha)^{\*} Jf(\alpha)$$

by the Lagrange identity in Corollary 7.2.3; here f(t) and h(t) denote the values of the unique absolutely continuous representatives of f and h, and g(t) and k(t) are the values of some representatives of g and k. Observe that the integral on the left-hand side does not depend on the choice of the representatives of g and k. The limit of the left-hand side exists as β → b and α → a, respectively, since $f, g, h, k \in L^2\_\Delta(\imath)$. As a consequence, one sees that each of the limits

$$\lim\_{t \to a} h(t)^\* Jf(t) \quad \text{and} \quad \lim\_{t \to b} h(t)^\* Jf(t)$$

exists and hence the Lagrange identity takes the limit form (7.6.9). $\square$
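
The compact-interval Lagrange identity at the start of this proof can be verified numerically in a toy model, namely the constant-coefficient real system with $H = 0$ and $\Delta = I$ on $[0, 1]$, whose fundamental matrix is $Y(t,\lambda) = \cos(\lambda t)I - \sin(\lambda t)J$. The sketch below is an illustration only and not part of the text; all names are ad hoc.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def Y(t, lam):
    # Fundamental matrix of Jf' = lam*f (H = 0, Delta = I), normalized by Y(0, lam) = I,
    # so that Y(t, lam) = cos(lam*t) I - sin(lam*t) J.
    return np.cos(lam * t) * I2 - np.sin(lam * t) * J

lam, mu = 1.0 + 0.5j, -0.4 + 0.2j
phi = np.array([1.0, 2.0], dtype=complex)
psi = np.array([0.5, -1.0], dtype=complex)
f = lambda t: Y(t, lam) @ phi    # {f, lam*f} belongs to Tmax
h = lambda t: Y(t, mu) @ psi     # {h, mu*h} belongs to Tmax

# Left-hand side on [alpha, beta] = [0, 1] with g = lam*f and k = mu*h:
# the integrand h^* g - k^* f reduces to (lam - conj(mu)) h^* f.
ts = np.linspace(0.0, 1.0, 4001)
vals = np.array([(lam - np.conj(mu)) * (h(t).conj() @ f(t)) for t in ts])
lhs = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(ts))  # trapezoid rule

# Right-hand side: the boundary term h(beta)^* J f(beta) - h(alpha)^* J f(alpha).
rhs = h(1.0).conj() @ (J @ f(1.0)) - h(0.0).conj() @ (J @ f(0.0))
assert abs(lhs - rhs) < 1e-5
```

For pairs {f, λf} and {h, μh} the two sides agree up to quadrature error, exactly as in the identity of Corollary 7.2.3.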

**Remark 7.6.5.** Observe that in (7.6.9) one uses the values f(t) and h(t) of the unique absolutely continuous representatives of f and h in dom Tmax, respectively. For instance, if {0, g} ∈ Tmax and {h, k} ∈ Tmax, then there exists an absolutely continuous function $\tilde{f}$ such that $\Delta(t)\tilde{f}(t) = 0$ and $J\tilde{f}'(t) - H(t)\tilde{f}(t) = \Delta(t)g(t)$ is satisfied for almost every t ∈ ı, where g is any representative of $g \in L^2\_\Delta(\imath)$. In this situation the identity (7.6.9) has the form

$$(g,h)\_\Delta - (0,k)\_\Delta = \lim\_{t \to b} h(t)^\* J\tilde{f}(t) - \lim\_{t \to a} h(t)^\* J\tilde{f}(t).$$

The elements in the minimal symmetric relation $T\_{\min} = \overline{T}\_0$ can be easily characterized in terms of these limits.

**Corollary 7.6.6.** Let Tmin and Tmax be the minimal and maximal relations associated with the real definite canonical system (7.2.3) in $L^2\_\Delta(\imath)$ and let {f,g} ∈ Tmax. Then {f,g} ∈ Tmin if and only if

$$\lim\_{t \to a} h(t)^\* Jf(t) = 0 \quad \text{and} \quad \lim\_{t \to b} h(t)^\* Jf(t) = 0 \tag{7.6.10}$$

for all {h, k} ∈ Tmax , where f(t) and h(t) denote the values of the unique absolutely continuous representatives of f and h, respectively.

Proof. Observe first that since $T\_{\min} = (T\_{\max})^\*$ one has {f,g} ∈ Tmin if and only if $(g, h)\_\Delta = (f, k)\_\Delta$ for all {h, k} ∈ Tmax. Hence, it follows from the Lagrange identity (7.6.9) that {f,g} ∈ Tmin if and only if

$$\lim\_{t \to b} h(t)^\* Jf(t) = \lim\_{t \to a} h(t)^\* Jf(t) \tag{7.6.11}$$

for all {h, k} ∈ Tmax. To see that for {f,g} ∈ Tmin each of the limits in (7.6.11) is zero, consider {h, k} ∈ Tmax and use Proposition 7.5.6 (with λ = 0) to obtain an element $\{h\_a, k\_a\} \in T\_{\max}$ that coincides with {h, k} in a neighborhood of a and with {0, 0} in a neighborhood of b. Then (7.6.11) implies

$$\lim\_{t \to a} h(t)^\* Jf(t) = \lim\_{t \to a} h\_a(t)^\* Jf(t) = \lim\_{t \to b} h\_a(t)^\* Jf(t) = 0,$$

and hence (7.6.10) follows together with (7.6.11). Conversely, if (7.6.10) holds for some {f,g} ∈ Tmax and all {h, k} ∈ Tmax, then the identity (7.6.11) holds for all {h, k} ∈ Tmax and hence {f,g} ∈ Tmin. $\square$

The main difficulty when dealing with the boundary value problems associated with the system (7.2.3) is to break the limits in (7.6.9) and in (7.6.10) into limits of the separate factors. The case where the endpoints are regular, quasiregular, or in the limit-circle case will be pursued in Section 7.7. If one of the endpoints is in the limit-point case, the situation is somewhat simpler, since one of the limits in (7.6.9) automatically vanishes, as will be shown now. A further discussion of the remaining limit will be pursued in Section 7.8.

**Lemma 7.6.7.** Let Tmax be the maximal relation associated with the real definite canonical system (7.2.3) in $L^2\_\Delta(\imath)$, let $\lambda \in \mathbb{C}$, and let $\mathbb{Y}\_a(\cdot, \lambda)$ and $\mathbb{Y}\_b(\cdot, \lambda)$ be as in (7.5.6). Then the following statements hold:

(i) Let a be a regular or quasiregular endpoint and let b be in the limit-point case. Then for $\lambda \in \mathbb{C}$

$$T\_{\text{max}} = T\_{\text{min}} \stackrel{\frown}{+} \{ \mathbb{Y}\_a(\cdot, \lambda) \phi : \phi \in \mathbb{C}^2 \},\tag{7.6.12}$$

where the sum is direct.

(ii) Let b be a regular or quasiregular endpoint and let a be in the limit-point case. Then for $\lambda \in \mathbb{C}$

$$T\_{\max} = T\_{\min} \, \widehat{+} \left\{ \mathbb{Y}\_b(\cdot, \lambda) \phi : \, \phi \in \mathbb{C}^2 \right\},$$

where the sum is direct.

Proof. It suffices to consider the case (i), since the case (ii) can be proved in a similar way. Let Y(·, λ) be a fundamental matrix of (7.2.4). Note that if a is regular or quasiregular, then it follows from Corollary 7.3.3 and (7.5.7) that $\mathbb{Y}\_a(\cdot, \lambda)\phi \in L^2\_\Delta(\imath) \times L^2\_\Delta(\imath)$, and Corollary 7.5.7 implies that $\mathbb{Y}\_a(\cdot, \lambda)\phi \in T\_{\max}$ for all $\phi \in \mathbb{C}^2$.

As Tmin ⊂ Tmax, it is clear that the right-hand side of (7.6.12) is contained in Tmax. By assumption and Corollary 7.6.3 (ii), Tmax is a two-dimensional extension of Tmin and hence it suffices to show that the elements $\mathbb{Y}\_a(\cdot, \lambda)\phi$, $\phi \in \mathbb{C}^2$, span a two-dimensional subspace of Tmax which has a trivial intersection with Tmin. In other words, it remains to check that $\mathbb{Y}\_a(\cdot, \lambda)\phi \in T\_{\min}$ if and only if $\phi = 0$. Suppose that $\mathbb{Y}\_a(\cdot, \lambda)\phi \in T\_{\min}$ for some $\phi \in \mathbb{C}^2$. For all $\psi \in \mathbb{C}^2$ one has $\mathbb{Y}\_a(\cdot, \overline{\lambda})\psi \in T\_{\max}$ and therefore, by Corollary 7.6.6,

$$0 = \lim\_{t \to a} \psi^\* Y\_a(t, \overline{\lambda})^\* J Y\_a(t, \lambda) \phi.$$

Since $Y\_a(\cdot, \overline{\lambda}) = Y(\cdot, \overline{\lambda})$ and $Y\_a(\cdot, \lambda) = Y(\cdot, \lambda)$ in a neighborhood of a, it follows that

$$0 = \lim\_{t \to a} \psi^\* Y(t, \overline{\lambda})^\* J Y(t, \lambda) \phi$$

for all $\psi \in \mathbb{C}^2$. Fix the fundamental matrix Y(·, λ) by Y(a, λ) = I. This leads to $\psi^\* J\phi = 0$ for all $\psi \in \mathbb{C}^2$, which implies $\phi = 0$. Hence, the right-hand side of (7.6.12) is a two-dimensional extension of Tmin which is contained in Tmax, and therefore coincides with Tmax. $\square$

In the next lemma the case of a singular endpoint in the limit-point case is discussed.

**Lemma 7.6.8.** Let Tmax be the maximal relation associated with the real definite canonical system (7.2.3) in $L^2\_\Delta(\imath)$. Then the endpoint a or b of the interval ı is in the limit-point case if and only if for all {f,g}, {h, k} ∈ Tmax one has

$$\lim\_{t \to a} h(t)^\* Jf(t) = 0 \quad \text{or} \quad \lim\_{t \to b} h(t)^\* Jf(t) = 0,$$

respectively. Here f(t) and h(t) denote the values of the unique absolutely continuous representatives of f and h, respectively.

Proof. It suffices to consider the case that the endpoint a is regular. The proof of the case where b is regular is similar. As usual the fundamental matrix is fixed by Y (a, λ) = I.

Assume that b is in the limit-point case. In this case one has the decomposition (7.6.12) in Lemma 7.6.7. Let {f,g}, {h, k} ∈ Tmax be decomposed in the form

$$\{f, g\} = \{f\_0, g\_0\} + \mathbb{Y}\_a(\cdot, \lambda)\phi \quad \text{and} \quad \{h, k\} = \{h\_0, k\_0\} + \mathbb{Y}\_a(\cdot, \lambda)\psi,$$

where $\{f\_0, g\_0\}, \{h\_0, k\_0\} \in T\_{\min}$ and $\phi, \psi \in \mathbb{C}^2$. Then it follows from (7.5.7) that

$$\lim\_{t \to b} h(t)^\* Jf(t) = \lim\_{t \to b} h\_0(t)^\* Jf\_0(t) = 0,$$

where Corollary 7.6.6 was used in the last step.

Conversely, assume that for all {f,g}, {h, k} ∈ Tmax

$$\lim\_{t \to b} h^\*(t) Jf(t) = 0. \tag{7.6.13}$$

Then b is in the limit-point case. To see this, assume that b is not in the limit-point case, so that b is in the limit-circle case by Corollary 7.4.5. It then follows that for $\lambda\_0 \in \mathbb{R}$ the columns of the matrix function $Y\_b(\cdot, \lambda\_0)$ are square-integrable with respect to Δ at b. Consider $\{f, g\} = \{h, k\} = \mathbb{Y}\_b(\cdot, \lambda\_0)\phi \in T\_{\max}$ for some $\phi \in \mathbb{C}^2$ such that $\phi^\* J \phi \neq 0$. Using (7.5.7) and $Y(t, \lambda\_0)^\* J Y(t, \lambda\_0) = J$ (see (7.2.8)), one computes

$$\lim\_{t \to b^{-}} h^\*(t) J f(t) = \lim\_{t \to b^{-}} \phi^\* Y(t, \lambda\_0)^\* J Y(t, \lambda\_0) \phi = \phi^\* J \phi \neq 0,$$

which contradicts (7.6.13). $\square$
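
The two ingredients of this computation, the identity $Y(t, \lambda\_0)^\* J Y(t, \lambda\_0) = J$ for real $\lambda\_0$ and the existence of a vector $\phi$ with $\phi^\* J \phi \neq 0$, can be illustrated numerically in the constant-coefficient model with $H = 0$ and $\Delta = I$, where $Y(t,\lambda) = \cos(\lambda t)I - \sin(\lambda t)J$. The sketch below is an illustration only and not part of the text.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def Y(t, lam):
    # Fundamental matrix with Y(0, lam) = I for the model system (H = 0, Delta = I).
    return np.cos(lam * t) * I2 - np.sin(lam * t) * J

lam0 = 0.7  # real spectral parameter
for t in np.linspace(0.0, 5.0, 11):
    # For real lam0 the fundamental matrix is J-unitary: Y^* J Y = J.
    assert np.allclose(Y(t, lam0).conj().T @ J @ Y(t, lam0), J)

# A vector with phi^* J phi != 0, as required for the contradiction argument.
phi = np.array([1.0, 1.0j])
assert np.isclose(phi.conj() @ (J @ phi), -2.0j)
```

Note that for real vectors $\phi$ one always has $\phi^\* J \phi = 0$ by skew-symmetry of $J$, so a genuinely complex $\phi$ is needed here.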

**Corollary 7.6.9.** Let Tmin be the minimal symmetric relation associated with the real definite canonical system (7.2.3) in $L^2\_\Delta(\imath)$ and assume that both endpoints of ı are in the limit-point case. Then the defect numbers of Tmin are (0, 0) and Tmin = Tmax is self-adjoint in $L^2\_\Delta(\imath)$.

Proof. Assume that the system is definite on the interval [α, β] ⊂ ı. Then the system is also definite on the intervals $(a, \beta')$ and $(\alpha', b)$, with $\beta' \in (\beta, b)$ and $\alpha' \in (a, \alpha)$, respectively. Denote the maximal relation in $L^2\_\Delta(\alpha', b)$ associated with the canonical system by $T\_{\max}(\alpha', b)$. It follows that the endpoint $\alpha'$ is regular and that the endpoint b is in the limit-point case for the canonical system on $(\alpha', b)$. To see the last assertion, assume that there are two linearly independent solutions on $(\alpha', b)$ which are square-integrable with respect to Δ at b. Since these solutions admit unique extensions to solutions on (a, b), this contradicts the assumption that b is in the limit-point case for the system on (a, b). In particular, for all $\{f\_b, g\_b\}, \{h\_b, k\_b\} \in T\_{\max}(\alpha', b)$ one concludes from Lemma 7.6.8 that

$$\lim\_{t \to b} h\_b(t)^\* J f\_b(t) = 0.$$

Now consider {f,g}, {h, k} ∈ Tmax and let $f\_b, g\_b, h\_b, k\_b$ be the restrictions of f, g, h, k to the interval $(\alpha', b)$. Then one has $\{f\_b, g\_b\}, \{h\_b, k\_b\} \in T\_{\max}(\alpha', b)$, and hence

$$\lim\_{t \to b} h(t)^\* Jf(t) = \lim\_{t \to b} h\_b(t)^\* Jf\_b(t) = 0.$$

A similar argument applies to the canonical system on $(a, \beta')$ and shows that

$$\lim\_{t \to a} h(t)^\* Jf(t) = 0$$

for all {f,g}, {h, k} ∈ Tmax . Therefore, Lemma 7.6.4 implies

$$(g,h)\_{\Delta} - (f,k)\_{\Delta} = \lim\_{t \to b} h(t)^{\*} Jf(t) - \lim\_{t \to a} h(t)^{\*} Jf(t) = 0$$

for all {f,g}, {h, k} ∈ Tmax, and hence $T\_{\max} \subset (T\_{\max})^\*$. From Theorem 7.6.2 one now concludes $T\_{\min} = T\_{\max}$, and thus it follows that the defect numbers of Tmin are (0, 0). $\square$

## **7.7 Boundary triplets for the limit-circle case**

Assume that the system (7.2.3) is real and definite, and assume that the endpoints of the system are both in the limit-circle case. A boundary triplet will be presented for $T\_{\max} = (T\_{\min})^\*$ and the self-adjoint extensions of Tmin will be described in terms of the boundary triplet. For a straightforward presentation, the case where the endpoints are regular or quasiregular is discussed first. At the end of the section it will be explained what modifications are necessary for endpoints which are in the limit-circle case and which are not regular or quasiregular.

The symmetric relation $T\_{\min} = \overline{T}\_0$ will now be described when a and b are regular or quasiregular.

**Lemma 7.7.1.** Assume that a and b are regular or quasiregular endpoints for the canonical system (7.2.3). Then the minimal relation Tmin is given by

$$T\_{\min} = \left\{ \{ f, g \} \in T\_{\max} \; : \; f(a) = f(b) = 0 \right\},$$

where f(a) and f(b) denote the boundary values of the unique absolutely continuous representatives of f.

Proof. According to Corollary 7.6.6, the element {f,g} ∈ Tmax belongs to Tmin if and only if

$$\lim\_{t \to a} h(t)^\* Jf(t) = 0 \quad \text{and} \quad \lim\_{t \to b} h(t)^\* Jf(t) = 0$$

for all {h, k} ∈ Tmax . Since the endpoints are regular or quasiregular, these conditions are the same as

$$h(a)^{\*}Jf(a) = 0 \quad \text{and} \quad h(b)^{\*}Jf(b) = 0$$

for all {h, k} ∈ Tmax. Now observe that for any $\gamma \in \mathbb{C}^2$ and $k \in L^2\_\Delta(\imath)$ there exists an element $h \in L^2\_\Delta(\imath)$ such that {h, k} ∈ Tmax and h(a) = γ or h(b) = γ. Hence, it follows that f(a) = 0 and f(b) = 0. $\square$

When the endpoints of the interval ı = (a, b) are regular or quasiregular for the canonical system, then the solutions of $Jf' - Hf = \lambda \Delta f$, $\lambda \in \mathbb{C}$, automatically belong to $L^2\_\Delta(\imath)$ and thus $\dim \ker (T\_{\max} - \lambda) = 2$, so that the defect numbers of Tmin are (2, 2); cf. Corollary 7.6.3. In the next theorem a boundary triplet for $(T\_{\min})^\* = T\_{\max}$ is provided and the corresponding γ-field and Weyl function are obtained in terms of an arbitrary fundamental matrix Y(·, λ) fixed by Y(c, λ) = I for some c ∈ [a, b].

**Theorem 7.7.2.** Assume that a and b are regular or quasiregular endpoints for the canonical system (7.2.3) and let the fundamental matrix Y(·, λ) of (7.2.4) be fixed by Y(c, λ) = I for some c ∈ [a, b]. Then $\{\mathbb{C}^2, \Gamma\_0, \Gamma\_1\}$, with

$$
\Gamma\_0\{f,g\} = \frac{1}{\sqrt{2}}(f(a) + f(b)) \quad \text{and} \quad \Gamma\_1\{f,g\} = -\frac{J}{\sqrt{2}}(f(a) - f(b)),
$$

where {f,g} ∈ Tmax, is a boundary triplet for $(T\_{\min})^\* = T\_{\max}$; here f(a) and f(b) denote the boundary values of the unique absolutely continuous representative of f. The corresponding γ-field and Weyl function are given by

$$
\gamma(\lambda) = \sqrt{2}Y(\cdot,\lambda) \left( Y(a,\lambda) + Y(b,\lambda) \right)^{-1}, \quad \lambda \in \rho(A\_0),
$$

and

$$M(\lambda) = -J\left(Y(a,\lambda) - Y(b,\lambda)\right)\left(Y(a,\lambda) + Y(b,\lambda)\right)^{-1}, \quad \lambda \in \rho(A\_0).$$

Proof. Let {f,g}, {h, k} ∈ Tmax . Since the endpoints a and b are regular or quasiregular, one has the Lagrange identity

$$\begin{aligned} (g,h)\_\Delta - (f,k)\_\Delta &= \int\_a^b \left( h(s)^\* \Delta(s) g(s) - k(s)^\* \Delta(s) f(s) \right) ds \\ &= h(b)^\* Jf(b) - h(a)^\* Jf(a); \end{aligned}$$

cf. Corollary 7.3.4. On the other hand, a straightforward calculation shows that

$$\begin{aligned} &\left(\Gamma\_1\{f,g\},\Gamma\_0\{h,k\}\right)-\left(\Gamma\_0\{f,g\},\Gamma\_1\{h,k\}\right) \\ &= -\frac{1}{2}\left(h(a)+h(b)\right)^\ast J\left(f(a)-f(b)\right)+\frac{1}{2}\left(h(a)-h(b)\right)^\ast J^\ast\left(f(a)+f(b)\right) \\ &= h(b)^\ast Jf(b)-h(a)^\ast Jf(a), \end{aligned}$$

and hence the boundary mappings Γ0 and Γ1 satisfy the abstract Green identity (2.1.1). Furthermore, the mapping $(\Gamma\_0, \Gamma\_1) : T\_{\max} \to \mathbb{C}^4$ is surjective. To see this, observe first that

$$
\begin{pmatrix}
\Gamma\_0 \{ f, g \} \\
\Gamma\_1 \{ f, g \} 
\end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} I & I \\ -J & J \end{pmatrix} \begin{pmatrix} f(a) \\ f(b) \end{pmatrix}, \quad \{f, g\} \in T\_{\text{max}},
$$

and that the 4 × 4 matrix on the right-hand side is invertible. Hence, it suffices to check that for any $\gamma\_a, \gamma\_b \in \mathbb{C}^2$ there exists {f,g} ∈ Tmax such that

$$
\begin{pmatrix} f(a) \\ f(b) \end{pmatrix} = \begin{pmatrix} \gamma\_a \\ \gamma\_b \end{pmatrix} . \tag{7.7.1}
$$

Choose a solution of the equation $Jh' - Hh = 0$ such that $h(a) = \gamma\_a$ and modify h as in Proposition 7.5.6, so that it becomes a solution $h\_a$ of an inhomogeneous equation $Jh\_a' - Hh\_a = \Delta k\_a$ which coincides with h in a neighborhood of a and vanishes in a neighborhood of the endpoint b. Then one has $\{h\_a, k\_a\} \in T\_{\max}$ and $h\_a(a) = \gamma\_a$ and $h\_a(b) = 0$. The same argument shows that there exists an element $\{h\_b, k\_b\} \in T\_{\max}$ such that $h\_b(b) = \gamma\_b$ and $h\_b(a) = 0$. Thus, for $f = h\_a + h\_b$ and $g = k\_a + k\_b$ one has {f,g} ∈ Tmax and (7.7.1) holds. It follows that the mapping $(\Gamma\_0, \Gamma\_1) : T\_{\max} \to \mathbb{C}^4$ is surjective, as claimed. Therefore, $\{\mathbb{C}^2, \Gamma\_0, \Gamma\_1\}$ is a boundary triplet for $(T\_{\min})^\* = T\_{\max}$.

To obtain the expressions for the associated γ-field and Weyl function, let λ ∈ ρ(A0), where A<sup>0</sup> = ker Γ0, and note that

$$\mathfrak{N}\_{\lambda}(T\_{\max}) = \left\{ Y(\cdot,\lambda)\phi : \phi \in \mathbb{C}^2 \right\}, \qquad \lambda \in \mathbb{C}.$$

Hence, for $\widehat{f}\_\lambda = \{Y(\cdot, \lambda)\phi, \lambda Y(\cdot, \lambda)\phi\}$, $\phi \in \mathbb{C}^2$, and $\lambda \in \rho(A\_0)$ one has

$$
\Gamma\_0 \widehat{f}\_\lambda = \frac{1}{\sqrt{2}} \big( Y(a,\lambda) + Y(b,\lambda) \big) \phi \quad \text{and} \quad \Gamma\_1 \widehat{f}\_\lambda = -\frac{J}{\sqrt{2}} \big( Y(a,\lambda) - Y(b,\lambda) \big) \phi,
$$

which leads to

$$\gamma(\lambda) = \left\{ \left\{ \frac{1}{\sqrt{2}} \left( Y(a,\lambda) + Y(b,\lambda) \right) \phi, Y(\cdot,\lambda)\phi \right\} : \phi \in \mathbb{C}^2 \right\}$$

and

$$M(\lambda) = \left\{ \left\{ \frac{1}{\sqrt{2}} \Big( Y(a, \lambda) + Y(b, \lambda) \Big) \phi, -\frac{J}{\sqrt{2}} \Big( Y(a, \lambda) - Y(b, \lambda) \Big) \phi \right\} : \phi \in \mathbb{C}^2 \right\};$$

cf. Definition 2.3.1 and Definition 2.3.4. Now observe that for λ ∈ ρ(A0) the matrix Y(a, λ) + Y(b, λ) is invertible, as otherwise $(Y(a,\lambda) + Y(b,\lambda))\psi = 0$ for some nontrivial $\psi \in \mathbb{C}^2$ would imply that λ is an eigenvalue of the self-adjoint relation $A\_0 = \ker \Gamma\_0$ with corresponding eigenfunction $Y(\cdot, \lambda)\psi$; a contradiction. Therefore, the formulas for the γ-field and the Weyl function follow from the above identities for γ(λ) and M(λ). $\square$
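
As an illustration of Theorem 7.7.2 (not part of the text), consider the model system $Jf' = \lambda f$ on $(0, 1)$ with $H = 0$, $\Delta = I$, and $c = a = 0$, so that $Y(t,\lambda) = \cos(\lambda t)I - \sin(\lambda t)J$. In this case the formula for the Weyl function evaluates to $M(\lambda) = \tan(\lambda/2)\,I$, a matrix Nevanlinna function. The helper names below are ad hoc.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def Y(t, lam):
    # Fundamental matrix with Y(0, lam) = I for Jf' = lam*f (H = 0, Delta = I).
    return np.cos(lam * t) * I2 - np.sin(lam * t) * J

def M(lam, a=0.0, b=1.0):
    # Weyl function from Theorem 7.7.2 with c = a = 0:
    # M(lam) = -J (Y(a,lam) - Y(b,lam)) (Y(a,lam) + Y(b,lam))^{-1}.
    Ya, Yb = Y(a, lam), Y(b, lam)
    return -J @ (Ya - Yb) @ np.linalg.inv(Ya + Yb)

lam = 0.8 + 0.6j
Mlam = M(lam)
# Closed form in this example: M(lam) = tan(lam/2) * I.
assert np.allclose(Mlam, np.tan(lam / 2) * I2)
# Nevanlinna property: the imaginary part of M is nonnegative in the upper half-plane.
imag_part = (Mlam - Mlam.conj().T) / 2j
assert np.all(np.linalg.eigvalsh(imag_part) >= -1e-12)
```

The closed form follows because all matrices involved lie in the commutative algebra spanned by $I$ and $J$, which is isomorphic to $\mathbb{C}$ via $J \mapsto i$.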

Before formulating the next proposition some terminology is recalled. Let T be an integral operator of the form

$$Tf(t) = \int\_{a}^{b} K(t, s) f(s) \, ds,$$

where f is a $\mathbb{C}^2$-valued function and K is a $\mathbb{C}^{2\times 2}$-valued measurable matrix kernel. If K is square-integrable with respect to the Lebesgue measure on ı × ı, that is,

$$\int\_{a}^{b} \int\_{a}^{b} \|K(t,s)\|\_{2}^{2} \, ds \, dt < \infty,$$

where $\|\cdot\|\_2$ is the Hilbert–Schmidt matrix norm in (7.1.4), then T is a bounded linear operator from $L^2(\imath)$ into itself, which belongs to the Hilbert–Schmidt class. Recall that a bounded linear operator from $L^2(\imath)$ into itself belongs to the Hilbert–Schmidt class if for some, and hence for all, orthonormal bases $(\varphi\_i)$ in $L^2(\imath)$ one has

$$\sum\_{i,j} |(T\varphi\_i, \varphi\_j)|^2 < \infty.$$

**Proposition 7.7.3.** Assume that a and b are regular or quasiregular endpoints for the canonical system (7.2.3) and let $\{\mathbb{C}^2, \Gamma\_0, \Gamma\_1\}$ be the boundary triplet for Tmax in Theorem 7.7.2 with corresponding Weyl function M. Let the fundamental matrix Y(·, λ) be fixed by Y(a, λ) = I. Then the self-adjoint relation $A\_0 = \ker \Gamma\_0$ is given by

$$A\_0 = \ker \Gamma\_0 = \left\{ \{f, g\} \in T\_{\text{max}} \, : \, f(a) + f(b) = 0 \right\},$$

where f(a) and f(b) denote the boundary values of the unique absolutely continuous representative of f. The resolvent of A<sup>0</sup> is an integral operator

$$\left( (A\_0 - \lambda)^{-1} g \right)(t) = \int\_a^b G\_0(t, s, \lambda) \Delta(s) g(s) \, ds, \quad \lambda \in \rho(A\_0), \tag{7.7.2}$$

which belongs to the Hilbert–Schmidt class. The Green function G0(t, s, λ) is given by

$$G\_0(t, s, \lambda) = G\_{0, \mathbf{e}}(t, s, \lambda) + G\_{0, \mathbf{i}}(t, s, \lambda), \tag{7.7.3}$$

where the entire part $G\_{0,\mathbf{e}}$ is given by

$$\begin{split} G\_{0, \mathbf{e}}(t, s, \lambda) &= Y(t, \lambda) \left[ \frac{1}{2} J \operatorname{sgn}(s - t) \right] Y(s, \overline{\lambda})^\* \\ &= \frac{1}{2} \begin{cases} -Y(t, \lambda) J Y(s, \overline{\lambda})^\*, & s < t, \\ Y(t, \lambda) J Y(s, \overline{\lambda})^\*, & s > t, \end{cases} \end{split} \tag{7.7.4}$$

and

$$G\_{0,i}(t,s,\lambda) = Y(t,\lambda)\left[-\frac{1}{2}JM(\lambda)J\right]Y(s,\overline{\lambda})^\*.\tag{7.7.5}$$

Proof. Step 1. The resolvent of A0 has the form (7.7.2) with G0 as in (7.7.3). To see this, let $\lambda \in \rho(A\_0)$ and $g \in L^2\_\Delta(\imath)$, and define the function

$$f(t) = \int\_{a}^{b} G\_0(t, s, \lambda) \Delta(s) g(s) \, ds.$$

From the structure of the Green function in (7.7.3), (7.7.4), and (7.7.5), it follows that

$$\begin{aligned} f(t) &= \frac{1}{2} Y(t, \lambda) J \Big( - \int\_a^t Y(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds + \int\_t^b Y(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds \Big) \\ &+ Y(t, \lambda) E\_0(\lambda) \int\_a^b Y(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds, \end{aligned}$$

with $E\_0(\lambda) = -\frac{1}{2} J M(\lambda) J$. Hence, f is well defined and absolutely continuous. A straightforward computation together with (7.2.10) shows that

$$\begin{aligned} Jf'(t) &= \frac{1}{2} JY'(t,\lambda) J \Big( - \int\_a^t Y(s,\overline{\lambda})^\* \Delta(s) g(s) \, ds + \int\_t^b Y(s,\overline{\lambda})^\* \Delta(s) g(s) \, ds \Big) \\ &+ \Delta(t) g(t) + JY'(t,\lambda) E\_0(\lambda) \int\_a^b Y(s,\overline{\lambda})^\* \Delta(s) g(s) \, ds. \end{aligned}$$

This implies that

$$Jf' - Hf = \lambda \Delta f + \Delta g = \Delta (g + \lambda f),$$

and thus one has {f,g + λf} ∈ Tmax . Furthermore, it is clear from the definition of f and Y (a, λ) = I that

$$f(a) = \left[\frac{1}{2}J + E\_0(\lambda)\right] \int\_a^b Y(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds$$

and

$$f(b) = Y(b, \lambda) \left[ -\frac{1}{2}J + E\_0(\lambda) \right] \int\_a^b Y(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds.$$


Since $E\_0(\lambda) = -\frac{1}{2} J M(\lambda) J = -\frac{1}{2} (I - Y(b,\lambda))(I + Y(b,\lambda))^{-1} J$, observe that

$$\begin{aligned} \frac{1}{2}J + E\_0(\lambda) &= Y(b,\lambda)(I + Y(b,\lambda))^{-1}J, \\ -\frac{1}{2}J + E\_0(\lambda) &= -(I + Y(b,\lambda))^{-1}J. \end{aligned}$$

Thus,

$$
\left[\frac{1}{2}J + E\_0(\lambda)\right] + Y(b, \lambda) \left[-\frac{1}{2}J + E\_0(\lambda)\right] = 0,
$$

and hence $\Gamma\_0\{f, g + \lambda f\} = \frac{1}{\sqrt{2}}(f(a) + f(b)) = 0$, which implies $\{f, g + \lambda f\} \in A\_0$. Therefore, $f = (A\_0 - \lambda)^{-1} g$ and the resolvent of A0 is given by (7.7.2).
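
The matrix identities used at the end of this step are plain 2 × 2 algebra and can be checked numerically in the model case $H = 0$, $\Delta = I$ on $(0, 1)$ with $Y(0, \lambda) = I$; the following sketch is an illustration only, with ad hoc names.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def Y(t, lam):
    # Fundamental matrix with Y(0, lam) = I (H = 0, Delta = I).
    return np.cos(lam * t) * I2 - np.sin(lam * t) * J

lam, b = 1.3 + 0.4j, 1.0
Yb = Y(b, lam)
M = -J @ (I2 - Yb) @ np.linalg.inv(I2 + Yb)   # Weyl function with Y(a, lam) = I
E0 = -0.5 * J @ M @ J                          # E_0(lambda) = -(1/2) J M(lambda) J

# E_0(lambda) = -(1/2)(I - Y(b))(I + Y(b))^{-1} J
assert np.allclose(E0, -0.5 * (I2 - Yb) @ np.linalg.inv(I2 + Yb) @ J)
# (1/2)J + E_0 = Y(b)(I + Y(b))^{-1} J   and   -(1/2)J + E_0 = -(I + Y(b))^{-1} J
assert np.allclose(0.5 * J + E0, Yb @ np.linalg.inv(I2 + Yb) @ J)
assert np.allclose(-0.5 * J + E0, -np.linalg.inv(I2 + Yb) @ J)
# Hence [ (1/2)J + E_0 ] + Y(b) [ -(1/2)J + E_0 ] = 0, which gives f(a) + f(b) = 0.
assert np.allclose((0.5 * J + E0) + Yb @ (-0.5 * J + E0), 0.0)
```

The final assertion is exactly the cancellation that shows the boundary condition of A0 is satisfied.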

Step 2. The kernel GΔ(·, ·, λ) defined by

$$G\_{\Delta}(t, s, \lambda) = \Delta(t)^{\frac{1}{2}} G\_0(t, s, \lambda) \Delta(s)^{\frac{1}{2}},\tag{7.7.6}$$

where the kernel G0(·, ·, λ) is given in (7.7.3), satisfies

$$\int\_{a}^{b} \int\_{a}^{b} \left\| G\_{\Delta}(t, s, \lambda) \right\|\_{2}^{2} ds \, dt < \infty;\tag{7.7.7}$$

here $\|\cdot\|\_2$ is the Hilbert–Schmidt matrix norm. In fact, from (7.7.3), (7.7.4), (7.7.5), and (7.1.5) it follows that

$$\begin{aligned} \int\_a^b \int\_a^b \left\| G\_\Delta(t, s, \lambda) \right\|\_2^2 ds \, dt &= \int\_a^b \int\_a^b \left\| \Delta(t)^{\frac{1}{2}} G\_0(t, s, \lambda) \Delta(s)^{\frac{1}{2}} \right\|\_2^2 ds \, dt \\ &\le C \int\_a^b \int\_a^b \left\| \Delta(t)^{\frac{1}{2}} Y(t, \lambda) \right\|\_2^2 \left\| Y(s, \overline{\lambda})^\* \Delta(s)^{\frac{1}{2}} \right\|\_2^2 ds \, dt. \end{aligned}$$

To show that the right-hand side is finite, note that with $Y(\cdot,\lambda) = \big(Y\_1(\cdot,\lambda) \;\; Y\_2(\cdot,\lambda)\big)$ one has

$$\int\_{a}^{b} \left\| \Delta(t)^{\frac{1}{2}} Y(t, \lambda) \right\|\_{2}^{2} dt = \int\_{a}^{b} \left| \Delta(t)^{\frac{1}{2}} Y\_{1}(t, \lambda) \right|^{2} dt + \int\_{a}^{b} \left| \Delta(t)^{\frac{1}{2}} Y\_{2}(t, \lambda) \right|^{2} dt < \infty,$$

as the columns $Y\_1(\cdot, \lambda)$ and $Y\_2(\cdot, \lambda)$ of Y(·, λ) are square-integrable with respect to Δ. Due to the identity $\|A\|\_2 = \|A^\*\|\_2$, it follows that

$$\int\_a^b \left\| Y(s,\overline{\lambda})^\* \Delta(s)^{\frac{1}{2}} \right\|\_2^2 ds = \int\_a^b \left\| \Delta(s)^{\frac{1}{2}} Y(s,\lambda) \right\|\_2^2 ds < \infty.$$

Therefore, the kernel G<sup>Δ</sup> is square-integrable with respect to the Lebesgue measure on [a, b] × [a, b] and hence (7.7.7) holds. Consequently, the integral operator TΔ, defined by

$$T\_{\Delta}f(t) = \int\_{a}^{b} G\_{\Delta}(t, s, \lambda) f(s) \, ds, \quad f \in L^{2}(\iota), \tag{7.7.8}$$

belongs to the Hilbert–Schmidt class in L2(ı).

Step 3. The operator $(A\_0 - \lambda)^{-1}$ belongs to the Hilbert–Schmidt class or, equivalently,

$$\sum\_{i,j} \left| \left( (A\_0 - \lambda)^{-1} u\_i, u\_j \right) \right|^2 < \infty, \quad \lambda \in \rho(A\_0), \tag{7.7.9}$$

for some, and hence for any, orthonormal basis $(u\_i)$ in $L^2\_\Delta(\imath)$. To see (7.7.9), observe that $(\Delta^{\frac{1}{2}} u\_i)$ is an orthonormal system in $L^2(\imath)$ and that (7.7.2) and (7.7.6) give

$$\begin{aligned} \left( (A\_0 - \lambda)^{-1} u\_i, u\_j \right)\_{\Delta} &= \int\_a^b u\_j(t)^\* \Delta(t) \left( \int\_a^b G\_0(t, s, \lambda) \Delta(s) u\_i(s) \, ds \right) dt \\ &= \int\_a^b \int\_a^b \left( \Delta(t)^{\frac{1}{2}} u\_j(t) \right)^\* G\_\Delta(t, s, \lambda) \Big( \Delta(s)^{\frac{1}{2}} u\_i(s) \Big) \, ds \, dt \\ &= (T\_\Delta \, \Delta^{\frac{1}{2}} u\_i, \Delta^{\frac{1}{2}} u\_j)\_{L^2(\imath)}, \end{aligned}$$

where $T\_\Delta$ is the Hilbert–Schmidt operator (7.7.8) in $L^2(\imath)$ whose kernel is given by (7.7.6). Hence, by Step 2 it follows that (7.7.9) holds, which implies that $(A\_0 - \lambda)^{-1}$ is a Hilbert–Schmidt operator. $\square$

Since the resolvent of the self-adjoint relation A<sup>0</sup> is a Hilbert–Schmidt operator, the spectrum of A<sup>0</sup> is discrete. As the minimal relation Tmin has no eigenvalues, the next statement follows immediately from Proposition 3.4.8.

**Theorem 7.7.4.** Let $\{\mathbb{C}^2, \Gamma\_0, \Gamma\_1\}$ be the boundary triplet for Tmax as in Theorem 7.7.2. Then the operator part $(T\_{\min})\_{\rm op}$ is simple in $L^2\_\Delta(\imath) \ominus \operatorname{mul} T\_{\min}$.

Theorem 7.7.4 together with the considerations in Section 3.5 and Section 3.6 ensures that the Weyl function M in Theorem 7.7.2 contains the complete spectral data of A0. In the present situation the eigenvalues of A0 coincide with the poles of the Weyl function and the multiplicities of the eigenvalues of A0 coincide with the multiplicities of the poles of M.
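
In the model system $H = 0$, $\Delta = I$ on $(0, 1)$ with $Y(a, \lambda) = I$ one has $M(\lambda) = \tan(\lambda/2)\,I$, and the correspondence between eigenvalues of A0 and poles of M can be seen by hand: the boundary condition $f(a) + f(b) = 0$ admits a nontrivial solution exactly when $\det(I + Y(b,\lambda)) = 2 + 2\cos\lambda = 0$, that is, at $\lambda = (2k+1)\pi$, which are precisely the poles of $\tan(\lambda/2)$. A hedged numerical sketch, not part of the text:

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def Y(t, lam):
    # Fundamental matrix with Y(0, lam) = I (H = 0, Delta = I).
    return np.cos(lam * t) * I2 - np.sin(lam * t) * J

# Eigenvalues of A_0 (boundary condition f(0) + f(1) = 0):
# det(I + Y(1, lam)) = 2 + 2*cos(lam) vanishes exactly at lam = (2k+1)*pi,
# and these points are the poles of M(lam) = tan(lam/2) * I.
for k in range(3):
    lam = (2 * k + 1) * np.pi
    assert abs(np.linalg.det(I2 + Y(1.0, lam))) < 1e-9
    assert abs(np.tan(lam / 2)) > 1e10  # numerical pole of tan(lam/2)
```

Both assertions single out the same discrete set of points, in agreement with the statement that the spectrum of A0 is encoded in the poles of M.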

Let $\{\mathbb{C}^2, \Gamma\_0, \Gamma\_1\}$ be the boundary triplet in Theorem 7.7.2 with corresponding γ-field γ and Weyl function M. The self-adjoint (maximal dissipative, maximal accumulative) extensions $A\_\Theta \subset T\_{\max}$ of Tmin are in one-to-one correspondence with the self-adjoint (maximal dissipative, maximal accumulative) relations Θ in $\mathbb{C}^2$ via

$$\begin{split} A\_{\Theta} &= \left\{ \{f, g\} \in T\_{\text{max}} : \left\{ \Gamma\_0 \{f, g\}, \Gamma\_1 \{f, g\} \right\} \in \Theta \right\} \\ &= \left\{ \{f, g\} \in T\_{\text{max}} : \left\{ f(a) + f(b), -Jf(a) + Jf(b) \right\} \in \Theta \right\}, \end{split} \tag{7.7.10}$$

where $f(a)$ and $f(b)$ denote the boundary values of the unique absolutely continuous representative of $f$. Recall from Theorem 2.6.1 that for $\lambda \in \rho(A_\Theta) \cap \rho(A_0)$ the Kreĭn formula for the corresponding resolvents reads

$$(A\_{\Theta} - \lambda)^{-1} = (A\_0 - \lambda)^{-1} + \gamma(\lambda) \left(\Theta - M(\lambda)\right)^{-1} \gamma(\overline{\lambda})^\*. \tag{7.7.11}$$

Assume in the following that $\Theta$ is a self-adjoint relation in $\mathbb{C}^2$. Since the spectrum of $A_0$ is discrete and the difference of the resolvents of $A_0$ and $A_\Theta$ is an operator of rank at most 2, it is clear that the spectrum of the self-adjoint relation $A_\Theta$ is also discrete. Note that $\lambda \in \rho(A_0)$ is an eigenvalue of $A_\Theta$ if and only if $\ker(\Theta - M(\lambda))$ is nontrivial, and that

$$\ker\left(A\_{\Theta} - \lambda\right) = \gamma(\lambda)\ker\left(\Theta - M(\lambda)\right).$$

For the self-adjoint relation $\Theta$ one may use a parametric representation with the help of $2 \times 2$ matrices $A$ and $B$ as in Section 1.10 and give a complete description of the (discrete) spectrum of $A_\Theta$ via poles of a transform of the Weyl function $M$; cf. Section 3.8 and Section 6.3.

In the following paragraph and corollary it is assumed for simplicity that the relation $\Theta$ in (7.7.10) is a self-adjoint $2 \times 2$ matrix. In this case the self-adjoint relation $A_\Theta$ in (7.7.10) is given by

$$A\_{\Theta} = \left\{ \{f, g\} \in T\_{\text{max}} : \Theta(f(a) + f(b)) = -Jf(a) + Jf(b) \right\} \tag{7.7.12}$$

and according to Section 3.8 the spectral properties of $A_\Theta$ can also be described with the help of the function

$$
\lambda \mapsto \left(\Theta - M(\lambda)\right)^{-1};\tag{7.7.13}
$$

that is, the poles of the matrix function (7.7.13) coincide with the (discrete) spectrum of $A_\Theta$ and the dimension of the eigenspace $\ker(A_\Theta - \lambda)$ coincides with the dimension of the range of the residue of the function in (7.7.13) at $\lambda$. Now fix a fundamental matrix $Y(\cdot,\lambda)$ by $Y(a,\lambda) = I$ as in Proposition 7.7.3. By Proposition 7.7.3, the resolvent $(A_0 - \lambda)^{-1}$ in the Kreĭn formula (7.7.11) is an integral operator. Since $I + JM(\lambda) = 2(I + Y(b,\lambda))^{-1}$, the $\gamma$-field and Weyl function in Theorem 7.7.2 are connected in the present situation via

$$\gamma(\cdot,\lambda) = \frac{1}{\sqrt{2}} Y(\cdot,\lambda)(I + JM(\lambda)), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

One verifies that

$$\gamma(\overline{\lambda})^\* g = \frac{1}{\sqrt{2}} (I - M(\lambda)J) \int\_a^b Y(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds, \quad g \in L^2\_\Delta(\imath),$$

and this implies that the second term on the right-hand side of (7.7.11) applied to $g \in L^2_\Delta(\imath)$ can be written as

$$\frac{1}{2}Y(\cdot,\lambda)(I+JM(\lambda))\left(\Theta-M(\lambda)\right)^{-1}(I-M(\lambda)J)\int\_{a}^{b}Y(s,\overline{\lambda})^{\*}\Delta(s)g(s)\,ds.$$

Combining this expression with the entire part $G_{0,\mathrm{e}}$ in (7.7.4) and the part $G_{0,\mathrm{i}}$ in (7.7.5) for $(A_0 - \lambda)^{-1}$, one sees that the resolvent of the self-adjoint extension $A_\Theta$ is an integral operator in $L^2_\Delta(\imath)$ of the form

$$\left( (A\_{\Theta} - \lambda)^{-1} g \right)(t) = \int\_{a}^{b} G\_{\Theta}(t, s, \lambda) \Delta(s) g(s) \, ds, \quad \lambda \in \rho(A\_{\Theta}) \cap \rho(A\_0), \tag{7.7.14}$$

where $g \in L^2_\Delta(\imath)$. The Green function $G_\Theta(t,s,\lambda)$ in (7.7.14) is given by

$$G\_{\Theta}(t, s, \lambda) = Y(t, \lambda) \left[ \frac{1}{2} J \operatorname{sgn}(s - t) + E\_{\Theta}(\lambda) \right] Y(s, \overline{\lambda})^\*,\tag{7.7.15}$$

where

$$E\_{\Theta}(\lambda) = -\frac{1}{2}J\left[M(\lambda) + (M(\lambda) - J)\left(\Theta - M(\lambda)\right)^{-1}(M(\lambda) + J)\right]J. \tag{7.7.16}$$

In the next corollary the Green function in (7.7.14) is further decomposed in the case that the self-adjoint relation $\Theta$ in $\mathbb{C}^2$ is a self-adjoint matrix.

**Corollary 7.7.5.** Let $a$ and $b$ be regular or quasiregular endpoints for the canonical system (7.2.3) and let $\{\mathbb{C}^2, \Gamma_0, \Gamma_1\}$ be the boundary triplet for $T_{\max}$ in Theorem 7.7.2 with corresponding Weyl function $M$. Assume that the fundamental matrix $Y(\cdot,\lambda)$ is fixed by $Y(a,\lambda) = I$, let $\Theta$ be a self-adjoint matrix in $\mathbb{C}^2$, and let $A_\Theta$ be the self-adjoint extension in (7.7.12). Then the Green function $G_\Theta(t,s,\lambda)$ in (7.7.14) has the decomposition

$$G\_{\Theta}(t, s, \lambda) = G\_{\Theta, \mathbf{e}}(t, s, \lambda) + G\_{\Theta, \mathbf{i}}(t, s, \lambda),$$

where the entire part $G_{\Theta,\mathrm{e}}$ is given by

$$G\_{\Theta, \mathbf{e}}(t, s, \lambda) = Y(t, \lambda) \left[ \frac{1}{2} J \operatorname{sgn}(s - t) + \frac{1}{2} J \Theta J \right] Y(s, \overline{\lambda})^\*,$$

and

$$G\_{\Theta, \mathbf{i}}(t, s, \lambda) = Y\_{\Theta}(t, \lambda) \left[ \frac{1}{2} (\Theta - M(\lambda))^{-1} \right] Y\_{\Theta}(s, \overline{\lambda})^\*,$$

where $Y_\Theta(t,\lambda) = Y(t,\lambda)(I + J\Theta)$.

Proof. Since $\Theta$ is a self-adjoint $2 \times 2$ matrix, one sees that

$$\begin{aligned} &\left(M(\lambda) - J\right) \left(\Theta - M(\lambda)\right)^{-1} (M(\lambda) + J) \\ &= -M(\lambda) - \Theta + \left(\Theta - J\right) \left(\Theta - M(\lambda)\right)^{-1} (\Theta + J) .\end{aligned}$$

Therefore, $E_\Theta(\lambda)$ in (7.7.16) has the form

$$\begin{aligned} E\_{\Theta}(\lambda) &= -\frac{1}{2}J \left[ -\Theta + (\Theta - J) \left( \Theta - M(\lambda) \right)^{-1} (\Theta + J) \right] J \\ &= \frac{1}{2}J\Theta J + (I + J\Theta) \left[ \frac{1}{2} \left( \Theta - M(\lambda) \right)^{-1} \right] (I - \Theta J). \end{aligned}$$

The assertion now follows from this identity combined with (7.7.15). □
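The matrix identity used in the proof, and the resulting decomposition of $E_\Theta(\lambda)$, are purely algebraic and can be checked numerically. The following numpy sketch uses a randomly generated Hermitian $\Theta$ and a generic complex $2\times 2$ matrix in place of $M(\lambda)$ (illustrative choices only, not the Weyl function of an actual canonical system):

```python
import numpy as np

rng = np.random.default_rng(1)
J = np.array([[0.0, -1.0], [1.0, 0.0]])  # J^2 = -I

# Random self-adjoint Theta and a generic complex 2x2 stand-in for M(lambda).
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Theta = (B + B.conj().T) / 2
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

X = np.linalg.inv(Theta - M)
I = np.eye(2)

# Identity from the proof of Corollary 7.7.5:
lhs = (M - J) @ X @ (M + J)
rhs = -M - Theta + (Theta - J) @ X @ (Theta + J)
assert np.allclose(lhs, rhs)

# Resulting decomposition of E_Theta(lambda) defined in (7.7.16):
E = -0.5 * J @ (M + (M - J) @ X @ (M + J)) @ J
E_dec = 0.5 * J @ Theta @ J + (I + J @ Theta) @ (0.5 * X) @ (I - Theta @ J)
assert np.allclose(E, E_dec)
```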

At the end of this section the assumption is that the endpoints $a$ and $b$ are in the general limit-circle case, so that the assumption that $a$ and $b$ are regular or quasiregular is abandoned. The transformation in Lemma 7.2.5 will be useful, as for any $\lambda_0 \in \mathbb{R}$ the solution matrix $U(\cdot,\lambda_0)$ is now square-integrable with respect to $\Delta$. This implies that the transformed equation (7.2.20) is in the quasiregular case at $a$ and $b$. Then most of the above results remain true once the (limit) values $f(a)$ and $f(b)$ are replaced by the limits in (7.4.6). The next proposition is the counterpart of Theorem 7.7.2.

**Proposition 7.7.6.** Assume that $a$ and $b$ are in the limit-circle case and let $Y(\cdot,\lambda)$ be a fundamental matrix. Let $\lambda_0 \in \mathbb{R}$, let $U(\cdot,\lambda_0)$ be a solution matrix as in Lemma 7.2.5, and consider the limits

$$\tilde{f}(a) = \lim\_{t \to a} U(t, \lambda\_0)^{-1} f(t) \quad \text{and} \quad \tilde{f}(b) = \lim\_{t \to b} U(t, \lambda\_0)^{-1} f(t)$$

for $\{f,g\} \in T_{\max}$; cf. Corollary 7.4.8. Then $\{\mathbb{C}^2, \Gamma_0, \Gamma_1\}$, with

$$
\Gamma\_0\{f,g\} = \frac{1}{\sqrt{2}}(\tilde{f}(a) + \tilde{f}(b)) \quad \text{and} \quad \Gamma\_1\{f,g\} = -\frac{J}{\sqrt{2}}(\tilde{f}(a) - \tilde{f}(b)),
$$

where $\{f,g\} \in T_{\max}$, is a boundary triplet for $(T_{\min})^* = T_{\max}$. The corresponding $\gamma$-field and Weyl function are given by

$$
\gamma(\lambda) = \sqrt{2}Y(\cdot,\lambda) \left( \tilde{Y}(a,\lambda) + \tilde{Y}(b,\lambda) \right)^{-1}, \quad \lambda \in \rho(A\_0),
$$

and

$$M(\lambda) = -J(\tilde{Y}(a,\lambda) - \tilde{Y}(b,\lambda)) \left( \tilde{Y}(a,\lambda) + \tilde{Y}(b,\lambda) \right)^{-1}, \quad \lambda \in \rho(A\_0),$$

where $\tilde{Y}(\cdot,\lambda)\varphi = U(\cdot,\lambda_0)^{-1} Y(\cdot,\lambda)\varphi$ for $\varphi \in \mathbb{C}^2$ and $\lambda \in \rho(A_0)$.

Proof. Recall that due to Corollary 7.4.9 the Lagrange formula takes the form

$$
\tilde{h}(b)^\* J \tilde{f}(b) - \tilde{h}(a)^\* J \tilde{f}(a) = \int\_a^b \left( h(s)^\* \Delta(s) g(s) - k(s)^\* \Delta(s) f(s) \right) ds
$$

for $\{f,g\}, \{h,k\} \in T_{\max}$. Now the same computation as in the proof of Theorem 7.7.2 shows that the abstract Green identity (2.1.1) is satisfied. The surjectivity of the map $(\Gamma_0, \Gamma_1) : T_{\max} \to \mathbb{C}^4$ and the form of the $\gamma$-field and Weyl function follow in the same way as in the proof of Theorem 7.7.2. □

## **7.8 Boundary triplets for the limit-point case**

Assume that the system (7.2.3) is real and definite, and assume that the endpoint $a$ is in the limit-circle case and the endpoint $b$ is in the limit-point case. A boundary triplet will be presented for $T_{\max} = (T_{\min})^*$ and will be used to describe the self-adjoint extensions of $T_{\min}$. To make the presentation straightforward, the case where the endpoint $a$ is regular or quasiregular is dealt with first. At the end of the section it will be explained what modifications are necessary if the endpoint $a$ is in the limit-circle case.

The symmetric relation $T_{\min}$ will now be described when $a$ is a regular or quasiregular endpoint and $b$ is in the limit-point case.

**Lemma 7.8.1.** Assume that the endpoint $a$ is regular or quasiregular and the endpoint $b$ is in the limit-point case. Then the minimal relation $T_{\min}$ is given by

$$T\_{\min} = \left\{ \{ f, g \} \in T\_{\max} \, : \, f(a) = 0 \right\},$$

where f(a) denotes the boundary value of the unique absolutely continuous representative of f.

Proof. According to Corollary 7.6.6 and Lemma 7.6.8, an element $\{f,g\} \in T_{\max}$ belongs to $T_{\min}$ if and only if

$$\lim\_{t \to a} h(t)^\* Jf(t) = 0$$

for all $\{h,k\} \in T_{\max}$. Since the endpoint $a$ is regular or quasiregular, this condition is the same as

$$h(a)^{\*}Jf(a) = 0$$

for all $\{h,k\} \in T_{\max}$. Now observe that for any $\gamma \in \mathbb{C}^2$ there exists $\{h,k\} \in T_{\max}$ such that $h(a) = \gamma$. In fact, choose a solution $u$ of $Ju' - Hu = 0$ such that $u(a) = \gamma$ and use Proposition 7.5.6 to modify $u$ to a function $h \in L^2_\Delta(\imath)$ which coincides with $u$ in a neighborhood of $a$, vanishes in a neighborhood of $b$, and satisfies $Jh' - Hh = \Delta k$ with some $k \in L^2_\Delta(\imath)$, that is, $\{h,k\} \in T_{\max}$. Since $\gamma^* Jf(a) = 0$ for all $\gamma \in \mathbb{C}^2$, it follows that $f(a) = 0$. □

Let the endpoint $a$ be regular or quasiregular and let $b$ be in the limit-point case. Then for some, and hence for all, $\lambda \in \mathbb{C} \setminus \mathbb{R}$ there exists, up to scalar multiples, one nontrivial solution of $Jf' - Hf = \lambda\Delta f$ which is square-integrable with respect to $\Delta$ at $b$, and thus $\dim \ker(T_{\max} - \lambda) = 1$ for $\lambda \in \mathbb{C} \setminus \mathbb{R}$. This implies that the defect numbers are $(1,1)$; cf. Corollary 7.6.3. In the next theorem a boundary triplet is provided in this case. To avoid confusion, recall that $Y_1(\cdot,\lambda)$ and $Y_2(\cdot,\lambda)$ are the columns of a fundamental matrix $Y(\cdot,\lambda)$, whereas $f_1$ and $f_2$ stand for the components of the $2 \times 1$ vector function $f$.

**Theorem 7.8.2.** Assume that the endpoint $a$ is regular or quasiregular and that the endpoint $b$ is in the limit-point case. Let $Y(\cdot,\lambda)$ be a fundamental matrix fixed by $Y(a,\lambda) = I$. Then $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$, where

$$
\Gamma\_0\{f,g\} = f\_1(a) \quad \text{and} \quad \Gamma\_1\{f,g\} = f\_2(a), \quad \{f,g\} \in T\_{\text{max}},
$$

is a boundary triplet for $(T_{\min})^* = T_{\max}$; here $f_1(a)$ and $f_2(a)$ denote the boundary values of the components of the unique absolutely continuous representative of $f$. Moreover, if $\lambda \in \mathbb{C} \setminus \mathbb{R}$ and $\chi(\cdot,\lambda)$ is a nontrivial element in $\mathfrak{N}_\lambda(T_{\max})$, then one has $\chi_1(a,\lambda) \neq 0$. For all $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the corresponding $\gamma$-field and Weyl function are given by

$$
\gamma(\cdot,\lambda) = Y\_1(\cdot,\lambda) + M(\lambda)Y\_2(\cdot,\lambda) \quad \text{and} \quad M(\lambda) = \frac{\chi\_2(a,\lambda)}{\chi\_1(a,\lambda)}.
$$

Proof. Since the endpoint $a$ is assumed to be regular or quasiregular, the elements $\{f,g\}, \{h,k\} \in T_{\max}$ have boundary values $f(a), h(a) \in \mathbb{C}^2$. Due to Lemma 7.6.8, the Lagrange identity in Corollary 7.3.4 takes the form

$$\begin{aligned} (g,h)\_\Delta - (f,k)\_\Delta &= \int\_a^b \left( h(s)^\* \Delta(s) g(s) - k(s)^\* \Delta(s) f(s) \right) ds \\ &= -h(a)^\* Jf(a) \\ &= f\_2(a) \overline{h\_1(a)} - f\_1(a) \overline{h\_2(a)} \\ &= \left( \Gamma\_1 \{ f,g \}, \Gamma\_0 \{ h,k \} \right) - \left( \Gamma\_0 \{ f,g \}, \Gamma\_1 \{ h,k \} \right). \end{aligned}$$

Hence, the boundary mappings $\Gamma_0$ and $\Gamma_1$ satisfy the abstract Green identity (2.1.1). In the proof of Lemma 7.8.1 it was shown that for $\gamma \in \mathbb{C}^2$ there exists $\{h,k\} \in T_{\max}$ such that $h(a) = \gamma$, and so the mapping $(\Gamma_0, \Gamma_1) : T_{\max} \to \mathbb{C}^2$ is surjective. It follows that $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$ is a boundary triplet for $(T_{\min})^* = T_{\max}$.

Due to the assumption that the endpoint $b$ is in the limit-point case, each eigenspace $\mathfrak{N}_\lambda(T_{\max})$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$, has dimension 1. Hence, $\widehat{f}_\lambda \in \widehat{\mathfrak{N}}_\lambda(T_{\max})$ has the form $\widehat{f}_\lambda = \{\chi(\cdot,\lambda)c, \lambda\chi(\cdot,\lambda)c\}$ for some $c \in \mathbb{C}$, where $\chi(\cdot,\lambda)$ is a nontrivial element in $\mathfrak{N}_\lambda(T_{\max})$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$. It follows from Definition 2.3.1 and Definition 2.3.4 that

$$\gamma(\lambda) = \left\{ \{ \chi\_1(a,\lambda)c, \chi(\cdot,\lambda)c \} : c \in \mathbb{C} \right\}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

and

$$M(\lambda) = \left\{ \{ \chi\_1(a, \lambda)c, \chi\_2(a, \lambda)c \} : c \in \mathbb{C} \right\}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Observe that $\chi_1(a,\lambda) \neq 0$ for $\lambda \in \mathbb{C} \setminus \mathbb{R}$, as otherwise $\lambda$ would be an eigenvalue of the self-adjoint relation $A_0 = \ker \Gamma_0$ and $\chi(\cdot,\lambda)$ would be a corresponding eigenfunction. Thus, one concludes that

$$\gamma(\lambda) = \frac{\chi(\cdot,\lambda)}{\chi\_1(a,\lambda)} \quad \text{and} \quad M(\lambda) = \frac{\chi\_2(a,\lambda)}{\chi\_1(a,\lambda)}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Note that $\chi(\cdot,\lambda) = \alpha_1 Y_1(\cdot,\lambda) + \alpha_2 Y_2(\cdot,\lambda)$ for some $\alpha_1, \alpha_2 \in \mathbb{C}$ and that the assumption $Y(a,\lambda) = I$ yields

$$
\begin{pmatrix} \chi\_1(a,\lambda) \\ \chi\_2(a,\lambda) \end{pmatrix} = \chi(a,\lambda) = \begin{pmatrix} \alpha\_1 \\ \alpha\_2 \end{pmatrix}.
$$

This implies

$$\gamma(\lambda) = \frac{\alpha\_1 Y\_1(\cdot, \lambda) + \alpha\_2 Y\_2(\cdot, \lambda)}{\chi\_1(a, \lambda)} = Y\_1(\cdot, \lambda) + M(\lambda) Y\_2(\cdot, \lambda), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

establishing the formulas for the $\gamma$-field and Weyl function. □

Note that the $\gamma$-field and Weyl function corresponding to the boundary triplet $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$ in Theorem 7.8.2 are defined and analytic on the resolvent set of the self-adjoint relation $A_0 = \ker \Gamma_0$. It follows in the same way as in Section 6.4 (see the discussion after the proof of Proposition 6.4.1) that the expressions for $\gamma$ and $M$ in Theorem 7.8.2 extend to points in $\rho(A_0) \cap \mathbb{R}$.

In the next proposition the resolvent of the self-adjoint relation $A_0$ is expressed as an integral operator.

**Proposition 7.8.3.** Assume that the endpoint $a$ is regular or quasiregular and that the endpoint $b$ is in the limit-point case. Let $Y(\cdot,\lambda)$ be a fundamental matrix fixed by $Y(a,\lambda) = I$. Let $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$ be the boundary triplet for $T_{\max}$ in Theorem 7.8.2 with corresponding Weyl function $M$. Then the self-adjoint relation $A_0 = \ker \Gamma_0$ is given by

$$A\_0 = \ker \Gamma\_0 = \left\{ \{f, g\} \in T\_{\text{max}} \, : \, f\_1(a) = 0 \right\},$$

where $f_1(a)$ denotes the boundary value of the first component of the unique absolutely continuous representative of $f$. The resolvent of $A_0$ is an integral operator

$$\left( (A\_0 - \lambda)^{-1} g \right)(t) = \int\_a^b G\_0(t, s, \lambda) \Delta(s) g(s) \, ds, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{7.8.1}$$

where $g \in L^2_\Delta(\imath)$. The Green function $G_0(t,s,\lambda)$ is given by

$$G\_0(t, s, \lambda) = G\_{0, \mathbf{e}}(t, s, \lambda) + G\_{0, \mathbf{i}}(t, s, \lambda), \tag{7.8.2}$$

where the entire part $G_{0,\mathrm{e}}$ is given by

$$G\_{0, \mathbf{e}}(t, s, \lambda) = \begin{cases} Y\_1(t, \lambda) Y\_2(s, \overline{\lambda})^\*, & s < t, \\ Y\_2(t, \lambda) Y\_1(s, \overline{\lambda})^\*, & s > t, \end{cases} \tag{7.8.3}$$

and

$$G\_{0, \mathbf{i}}(t, s, \lambda) = Y\_2(t, \lambda) M(\lambda) Y\_2(s, \overline{\lambda})^\*. \tag{7.8.4}$$

Proof. To prove the identity (7.8.1), consider $g \in L^2_\Delta(\imath)$ and define the function $f$ by the right-hand side of (7.8.1) with $G_0$ as in (7.8.2). In view of (7.8.3) and (7.8.4), this means that

$$\begin{split} f(t) &= \left( Y\_1(t,\lambda) + Y\_2(t,\lambda)M(\lambda) \right) \int\_a^t Y\_2(s,\overline{\lambda})^\* \Delta(s) g(s) \, ds \\ &\quad + Y\_2(t,\lambda) \int\_t^b \left( Y\_1(s,\overline{\lambda})^\* + M(\overline{\lambda})^\* Y\_2(s,\overline{\lambda})^\* \right) \Delta(s) g(s) \, ds. \end{split} \tag{7.8.5}$$

Observe that, indeed, the integral near b exists, since one has

$$\gamma(\cdot,\overline{\lambda}) = Y\_1(\cdot,\overline{\lambda}) + M(\overline{\lambda})Y\_2(\cdot,\overline{\lambda}) \in L^2\_{\Delta}(\imath)$$

and $g \in L^2_\Delta(\imath)$. It follows that the function $f$ in (7.8.5) is well defined and absolutely continuous. Rewrite (7.8.5) in the form

$$\begin{split} f(t) &= Y(t,\lambda) \begin{pmatrix} 1 \\ M(\lambda) \end{pmatrix} \int\_{a}^{t} \begin{pmatrix} 0 & 1 \end{pmatrix} Y(s,\overline{\lambda})^{\*} \Delta(s) g(s) \, ds \\ &+ Y(t,\lambda) \begin{pmatrix} 0 \\ 1 \end{pmatrix} \int\_{t}^{b} \begin{pmatrix} 1 & M(\overline{\lambda})^{\*} \end{pmatrix} Y(s,\overline{\lambda})^{\*} \Delta(s) g(s) \, ds. \end{split} \tag{7.8.6}$$

Then a straightforward calculation using the identity

$$
\begin{pmatrix} 1 \\ M(\lambda) \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 1 & M(\overline{\lambda})^\* \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = -J \tag{7.8.7}
$$

and (7.2.10) shows that $f$ satisfies the inhomogeneous equation

$$Jf' - Hf = \lambda \Delta f + \Delta g.\tag{7.8.8}$$

Moreover, one sees from (7.8.5) that f satisfies

$$f(a) = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \int\_a^b \left( Y\_1(s, \overline{\lambda}) + Y\_2(s, \overline{\lambda}) M(\overline{\lambda}) \right)^\* \Delta(s) g(s) \, ds = \begin{pmatrix} 0 \\ (g, \gamma(\overline{\lambda}))\_\Delta \end{pmatrix}.$$

Now denote the function on the left-hand side of (7.8.1) by $h = (A_0 - \lambda)^{-1}g$. Then it is clear that

$$\{h, \lambda h + g\} = \left\{ (A\_0 - \lambda)^{-1} g, g + \lambda (A\_0 - \lambda)^{-1} g \right\} \in A\_0 \subset T\_{\text{max}}\,,\tag{7.8.9}$$

so that h also satisfies (7.8.8). Moreover, by (7.8.9) and Proposition 2.3.2 one obtains

$$\begin{aligned} h\_1(a) &= \Gamma\_0 \{ h, \lambda h + g \} = 0, \\ h\_2(a) &= \Gamma\_1 \{ h, \lambda h + g \} = \gamma(\overline{\lambda})^\* g = (g, \gamma(\overline{\lambda}))\_{\Delta}. \end{aligned}$$

Thus, for a fixed $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the functions $h$ and $f$ satisfy the same inhomogeneous equation (7.8.8) and they have the same initial value $f(a) = h(a)$. Since the solution is unique, $h = f$. One concludes that $(A_0 - \lambda)^{-1}g$ is given by the right-hand side of (7.8.1). □
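The identity (7.8.7) used in this proof rests on the symmetry $M(\overline{\lambda})^* = M(\lambda)$ enjoyed by scalar Weyl functions. A small numerical sketch with the illustrative Nevanlinna function $M(\lambda) = -1/\lambda$ (a sample choice, not the Weyl function of a specific system here) confirms both the symmetry and (7.8.7):

```python
import numpy as np

# Sample scalar Nevanlinna function: M(lam) = -1/lam maps the upper
# half-plane into itself and satisfies M(conj(lam)) = conj(M(lam)).
M = lambda lam: -1.0 / lam

lam = 0.7 + 1.3j
assert np.isclose(M(lam.conjugate()), M(lam).conjugate())

# Identity (7.8.7); note that M(conj(lam))^* = M(lam) for such a function,
# which makes the (2,2) entry of the difference vanish.
col1 = np.array([[1.0], [M(lam)]])
col2 = np.array([[0.0], [1.0]])
row1 = np.array([[0.0, 1.0]])
row2 = np.array([[1.0, M(lam.conjugate()).conjugate()]])
J = np.array([[0.0, -1.0], [1.0, 0.0]])
assert np.allclose(col1 @ row1 - col2 @ row2, -J)
```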

Note that there exist canonical systems whose Weyl functions are of the form $M(\lambda) = \alpha + \beta\lambda$, where $\alpha \in \mathbb{R}$ and $\beta \geq 0$; cf. Example 7.10.4 for a special case. Hence, the functions $M$ and $G_{0,\mathrm{i}}$ in (7.8.4) may be entire.

**Theorem 7.8.4.** Assume that the endpoint $a$ is regular or quasiregular and that the endpoint $b$ is in the limit-point case. Then the operator part $(T_{\min})_{\rm op}$ is simple in the Hilbert space $L^2_\Delta(\imath) \ominus \operatorname{mul} T_{\min}$.

Proof. Step 1. Let $g \in L^2_\Delta(\imath)$ and define $f = (A_0 - \lambda)^{-1}g$. Then it is clear that $\{f, g + \lambda f\} \in A_0$ and one has

$$f(t) = -Y(t, \lambda) J \int\_{a}^{t} Y(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds + Y(t, \lambda) \begin{pmatrix} 0\\ (g, \gamma(\overline{\lambda}))\_{\Delta} \end{pmatrix},\tag{7.8.10}$$

where $Y(\cdot,\lambda)$ is the fundamental matrix fixed by $Y(a,\lambda) = I$. In fact, it follows from Proposition 7.8.3 and its proof that $f$ is given by (7.8.1) or, equivalently, by (7.8.6). Now on the right-hand side of (7.8.6) subtract and add the term

$$Y(t,\lambda)\begin{pmatrix}0\\1\end{pmatrix}\int\_{a}^{t}\begin{pmatrix}1&M(\overline{\lambda})^{\*}\end{pmatrix}Y(s,\overline{\lambda})^{\*}\Delta(s)g(s)\,ds,$$

and use (7.8.7) and γ(·, λ) = Y1(·, λ) + M(λ)Y2(·, λ). This yields (7.8.10).

Step 2. The multivalued part $\operatorname{mul} T_{\min}$ is given by

$$\left(\text{span}\left\{\gamma(\lambda) : \lambda \in \mathbb{C} \setminus \mathbb{R}\right\}\right)^{\perp} = \text{mul}\, T\_{\min}\,,\tag{7.8.11}$$

which, in view of Corollary 3.4.6, is equivalent to $(T_{\min})_{\rm op}$ being simple in the Hilbert space $L^2_\Delta(\imath) \ominus \operatorname{mul} T_{\min}$.

The identity (7.8.11) will be verified by exhibiting the corresponding inclusions. For the inclusion $(\supset)$ in (7.8.11), let $g \in \operatorname{mul} T_{\min}$. Since $\{0,g\} \in T_{\min}$ and $\{\gamma(\lambda), \lambda\gamma(\lambda)\} \in T_{\max}$ for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$ one sees that

$$(g, \gamma(\lambda))\_{\Delta} = (g, \gamma(\lambda))\_{\Delta} - (0, \lambda \gamma(\lambda))\_{\Delta} = 0.$$

Hence, $g \in \operatorname{mul} T_{\min}$ is orthogonal to all $\gamma(\lambda)$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$.

For the inclusion $(\subset)$ in (7.8.11), assume that $g \in L^2_\Delta(\imath)$ is orthogonal to all $\gamma(\lambda)$, $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Then it follows from (7.8.10) that

$$f(t) = \left( (A\_0 - \lambda)^{-1} g \right)(t) = -Y(t, \lambda) J \int\_a^t Y(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds. \tag{7.8.12}$$

Clearly, f(a) = 0, so that, in fact,

$$\{f, g + \lambda f\} \in T\_{\text{min}}\,. \tag{7.8.13}$$

Let $h \in L^2_\Delta(\imath)$ have compact support, say in $[a', b'] \subset \imath$. Then it follows from (7.8.12) that

$$\left( (A\_0 - \lambda)^{-1} g, h \right)\_{\Delta} = - \int\_a^b \left( \int\_a^t h(t)^\* \Delta(t) Y(t, \lambda) JY(s, \overline{\lambda})^\* \Delta(s) g(s) \, ds \right) dt$$

and due to the structure of the double integral the integration takes place only on the square $[a', b'] \times [a', b']$.

Now consider a bounded interval $\delta \subset \mathbb{R}$ such that the endpoints of $\delta$ are not eigenvalues of $A_0$. Then the spectral projection of $A_0$ corresponding to the interval $\delta$ is given by Stone's formula (1.5.7) (see also Example A.1.4),

$$(E(\delta)g, h)\_{\Delta} = \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{\delta} \left( \left( \left( A\_0 - (\mu + i\varepsilon) \right)^{-1} - \left( A\_0 - (\mu - i\varepsilon) \right)^{-1} \right) g, h \right)\_{\Delta} d\mu.$$

Making use of the above integral for $((A_0 - \lambda)^{-1}g, h)_\Delta$ one has that

$$\begin{aligned} (E(\delta)g,h)\_{\Delta} &= \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{\delta} \int\_{a}^{b} \int\_{a}^{t} h(t)^{\*} \Delta(t) F\_{\varepsilon}(t,s,\mu) \Delta(s) g(s) \, ds \, dt \, d\mu \\ &= \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{a}^{b} \int\_{a}^{t} h(t)^{\*} \Delta(t) \left( \int\_{\delta} F\_{\varepsilon}(t,s,\mu) \, d\mu \right) \Delta(s) g(s) \, ds \, dt, \end{aligned}$$

where

$$F\_{\varepsilon}(t, s, \mu) = Y(t, \mu - i\varepsilon)JY(s, \overline{\mu - i\varepsilon})^\* - Y(t, \mu + i\varepsilon)JY(s, \overline{\mu + i\varepsilon})^\*.$$

To justify the application of Fubini's theorem above note that each of the functions

$$s \mapsto \Delta(s)g(s) \quad \text{and} \quad t \mapsto \Delta(t)h(t)$$

is integrable on $[a', b']$, due to $g, h \in L^2_\Delta(a,b)$ and Lemma 7.1.4, and that the function

$$(s, t, \lambda) \mapsto Y(t, \lambda)JY(s, \overline{\lambda})^\* - Y(t, \overline{\lambda})JY(s, \lambda)^\*, \quad s, t \in [a', b'], \ \lambda \in K,$$

where $K \subset \mathbb{C}$ is some compact set, is continuous and hence bounded on the set $[a', b'] \times [a', b'] \times K$. Since the mapping $\lambda \mapsto Y(t,\lambda)$ is entire, it follows that

$$\lim\_{\varepsilon \downarrow 0} \int\_{\delta} F\_{\varepsilon}(t, s, \mu) \, d\mu = 0$$

and dominated convergence implies that $(E(\delta)g, h)_\Delta = 0$ for any $h \in L^2_\Delta(\imath)$ with compact support. Therefore, $E(\delta)g = 0$ for any bounded interval $\delta$ with endpoints not in $\sigma_{\rm p}(A_0)$. With $\delta \to \mathbb{R}$ one concludes $E(\mathbb{R})g = 0$ and this implies $g \in \operatorname{mul} A_0$. Since $\{f, g + \lambda f\} \in A_0$ and $\{0,g\} \in A_0$, it follows that $\{f, \lambda f\} \in A_0$ and hence $f = 0$, as $\lambda \in \mathbb{C} \setminus \mathbb{R}$ is not an eigenvalue of $A_0$. Since $\{f, g + \lambda f\} \in T_{\min}$ by (7.8.13), one concludes $\{0,g\} \in T_{\min}$, that is, $g \in \operatorname{mul} T_{\min}$. This shows the inclusion $(\subset)$ in (7.8.11). □
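Stone's formula as used in Step 2 can be illustrated in finite dimensions, where all limits are elementary: for a self-adjoint matrix the smeared resolvent difference integrates to the spectral projection of the interval. The sketch below (matrix, interval, and discretization are illustrative choices) recovers the projection onto the eigenvalue lying inside $\delta$:

```python
import numpy as np

# Finite-dimensional illustration of Stone's formula (1.5.7). Take the
# self-adjoint matrix A = diag(0, 2); its resolvent is diagonal with entries
# 1/(e - z). The interval delta = (-1/2, 1/2) contains only the eigenvalue 0.
evals = np.array([0.0, 2.0])
a, b, eps, n = -0.5, 0.5, 1e-4, 200001
mus = np.linspace(a, b, n)

Rp = 1.0 / (evals[None, :] - (mus[:, None] + 1j * eps))  # (A - (mu + i eps))^{-1}
Rm = 1.0 / (evals[None, :] - (mus[:, None] - 1j * eps))  # (A - (mu - i eps))^{-1}

# (1 / 2 pi i) times the integral over delta of the resolvent difference,
# approximated by a Riemann sum; the integrand is a Poisson kernel of width eps.
E_diag = ((Rp - Rm).sum(axis=0) * (mus[1] - mus[0]) / (2j * np.pi)).real

# E(delta) = diag(1, 0): only the eigenvalue inside delta is picked up.
assert np.allclose(E_diag, [1.0, 0.0], atol=1e-3)
```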

Let $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$ be the boundary triplet in Theorem 7.8.2. Then the self-adjoint extensions of $T_{\min}$ are in one-to-one correspondence with the numbers $\tau \in \mathbb{R} \cup \{\infty\}$ via

$$A\_{\tau} = \{ \{ f, g \} \in T\_{\text{max}} \, : \, \Gamma\_1 \{ f, g \} = \tau \Gamma\_0 \{ f, g \} \}. \tag{7.8.14}$$

Note that $A_0$ corresponds to $\tau = \infty$. For a given $\tau \in \mathbb{R} \cup \{\infty\}$ one can transform the boundary triplet $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$ as follows:

$$
\begin{pmatrix} \Gamma\_0^\tau \\ \Gamma\_1^\tau \end{pmatrix} = \frac{1}{\sqrt{\tau^2 + 1}} \begin{pmatrix} \tau & -1 \\ 1 & \tau \end{pmatrix} \begin{pmatrix} \Gamma\_0 \\ \Gamma\_1 \end{pmatrix} \tag{7.8.15}
$$

(see (2.5.19)), so that $A_\tau = \ker \Gamma_0^\tau$. Then, by (2.5.20), the $\gamma$-field and Weyl function corresponding to the new boundary triplet are given by

$$M\_{\tau}(\lambda) = \frac{1 + \tau M(\lambda)}{\tau - M(\lambda)} \quad \text{and} \quad \gamma\_{\tau}(\lambda) = \frac{\sqrt{\tau^2 + 1}}{\tau - M(\lambda)} \gamma(\lambda), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{7.8.16}$$

The Weyl function $M_\tau$ and the $\gamma$-field $\gamma_\tau$ are connected by

$$\frac{M\_{\tau}(\lambda) - M\_{\tau}(\mu)^{\*}}{\lambda - \overline{\mu}} = \gamma\_{\tau}(\mu)^{\*}\gamma\_{\tau}(\lambda), \quad \lambda, \mu \in \mathbb{C} \backslash \mathbb{R}.$$
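The fractional-linear transform in (7.8.16) with real $\tau$ maps Nevanlinna functions to Nevanlinna functions, so $M_\tau$ is again a Weyl-type function; indeed, a short computation gives $\operatorname{Im} M_\tau(\lambda) = (1+\tau^2)\operatorname{Im} M(\lambda)/|\tau - M(\lambda)|^2$. A quick numerical check with the illustrative sample $M(\lambda) = -1/\lambda$ (not tied to a specific canonical system):

```python
import numpy as np

# Sample scalar Nevanlinna function and the transform from (7.8.16); tau real.
M = lambda lam: -1.0 / lam          # illustrative choice, Im M > 0 on C_+
tau = 0.5

M_tau = lambda lam: (1 + tau * M(lam)) / (tau - M(lam))

rng = np.random.default_rng(2)
for _ in range(100):
    lam = rng.standard_normal() + 1j * (abs(rng.standard_normal()) + 1e-6)
    # The transform with real tau preserves the upper half-plane ...
    assert M_tau(lam).imag > 0
    # ... and the symmetry M_tau(conj(lam)) = conj(M_tau(lam)).
    assert np.isclose(M_tau(lam.conjugate()), M_tau(lam).conjugate())
```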

Let $Y(\cdot,\lambda)$ be a fundamental matrix fixed by $Y(a,\lambda) = I$. In a similar fashion one can transform $Y(\cdot,\lambda)$ to a fundamental matrix $V(\cdot,\lambda)$ given by

$$\begin{pmatrix} V\_1(\cdot,\lambda) & V\_2(\cdot,\lambda) \end{pmatrix} = \begin{pmatrix} Y\_1(\cdot,\lambda) & Y\_2(\cdot,\lambda) \end{pmatrix} \frac{1}{\sqrt{\tau^2+1}} \begin{pmatrix} \tau & 1\\ -1 & \tau \end{pmatrix}. \tag{7.8.17}$$

Note that $V(a,\lambda)^* J V(a,\lambda) = J$ holds; cf. (7.2.8) and (7.2.11). Due to this transformation the $\gamma$-field $\gamma_\tau$ can be written in terms of the new fundamental system as

$$
\gamma\_\tau(\lambda) = V\_1(\cdot,\lambda) + M\_\tau(\lambda)V\_2(\cdot,\lambda),
$$

which belongs to $L^2_\Delta(\imath)$, while the second column of $V(\cdot,\lambda)$ satisfies the (formal) boundary condition which determines $A_\tau = \ker \Gamma_0^\tau$:

$$V\_2(a, \lambda) = \frac{1}{\sqrt{\tau^2 + 1}} \begin{pmatrix} 1 \\ \tau \end{pmatrix}. \tag{7.8.18}$$
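The transformation matrix in (7.8.17) is a rotation with determinant 1, and any real $2\times 2$ matrix $R$ with $\det R = 1$ satisfies $R^\top J R = J$; this is what makes $V(\cdot,\lambda)$ again a fundamental matrix with $V(a,\lambda)^* J V(a,\lambda) = J$. A short numerical check (the value of $\tau$ is an arbitrary sample):

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])
tau = 1.7  # arbitrary real parameter, for illustration

# The matrix in (7.8.17): a rotation, hence det R = 1 and R^T J R = (det R) J = J.
R = np.array([[tau, 1.0], [-1.0, tau]]) / np.sqrt(tau**2 + 1)
assert np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(R.T @ J @ R, J)

# With Y(a, lam) = I the second column of V(a, lam) is the vector in (7.8.18).
V_a = np.eye(2) @ R
assert np.allclose(V_a[:, 1], np.array([1.0, tau]) / np.sqrt(tau**2 + 1))
```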

The next proposition is the counterpart of Proposition 7.8.3 for the self-adjoint extensions $A_\tau$.

**Proposition 7.8.5.** Assume that the endpoint $a$ is regular or quasiregular and that the endpoint $b$ is in the limit-point case. Let $Y(\cdot,\lambda)$ be a fundamental matrix fixed by $Y(a,\lambda) = I$. Let $\{\mathbb{C}, \Gamma_0, \Gamma_1\}$ be the boundary triplet for $T_{\max}$ in Theorem 7.8.2. For $\tau \in \mathbb{R}$ let $A_\tau$ be a self-adjoint extension of $T_{\min}$ given by (7.8.14) and let $M_\tau$ be as in (7.8.16). Then the resolvent of $A_\tau$ is an integral operator

$$\left(\left(A\_{\tau}-\lambda\right)^{-1}g\right)(t) = \int\_{a}^{b} G\_{\tau}(t,s,\lambda)\Delta(s)g(s)\,ds, \quad \lambda \in \mathbb{C} \; \backslash \mathbb{R},\tag{7.8.19}$$

where $g \in L^2_\Delta(\imath)$. The Green function $G_\tau(t,s,\lambda)$ is given by

$$G\_{\tau}(t, s, \lambda) = G\_{\tau, \mathbf{e}}(t, s, \lambda) + G\_{\tau, \mathbf{i}}(t, s, \lambda),\tag{7.8.20}$$


where the entire part $G_{\tau,\mathrm{e}}$ is given by

$$G\_{\tau, \mathbf{e}}(t, s, \lambda) = \begin{cases} V\_1(t, \lambda) V\_2(s, \overline{\lambda})^\*, & s < t, \\ V\_2(t, \lambda) V\_1(s, \overline{\lambda})^\*, & s > t, \end{cases} \tag{7.8.21}$$

and

$$G\_{\tau, \mathbf{i}}(t, s, \lambda) = V\_2(t, \lambda) M\_\tau(\lambda) V\_2(s, \overline{\lambda})^\*. \tag{7.8.22}$$

Proof. The proof of Proposition 7.8.5 is similar to the proof of Proposition 7.8.3. In order to show the identity (7.8.19), consider $g \in L^2_\Delta(\imath)$ and define the function $f$ by the right-hand side of (7.8.19). Then one has

$$\begin{split} f(t) &= \left(V\_1(t,\lambda) + V\_2(t,\lambda)M\_\tau(\lambda)\right) \int\_a^t V\_2(s,\overline{\lambda})^\* \Delta(s)g(s) \, ds \\ &\quad + V\_2(t,\lambda) \int\_t^b \left(V\_1(s,\overline{\lambda})^\* + M\_\tau(\overline{\lambda})^\*V\_2(s,\overline{\lambda})^\*\right) \Delta(s)g(s) \, ds \end{split} \tag{7.8.23}$$

and the same arguments as in the proof of Proposition 7.8.3 show that $f$ is well defined, absolutely continuous, and satisfies the inhomogeneous equation

$$Jf' - Hf = \lambda \Delta f + \Delta g.\tag{7.8.24}$$

Moreover, one sees from (7.8.23) that f satisfies

$$\begin{aligned} f(a) &= \frac{1}{\sqrt{\tau^2 + 1}} \begin{pmatrix} 1 \\ \tau \end{pmatrix} \int\_a^b \left( V\_1(s, \overline{\lambda}) + V\_2(s, \overline{\lambda}) M\_\tau(\overline{\lambda}) \right)^\* \Delta(s) g(s) \, ds \\ &= \frac{1}{\sqrt{\tau^2 + 1}} \begin{pmatrix} 1 \\ \tau \end{pmatrix} (g, \gamma\_\tau(\overline{\lambda}))\_\Delta, \end{aligned}$$

and hence

$$\frac{\tau f\_1(a) - f\_2(a)}{\sqrt{\tau^2 + 1}} = 0 \quad \text{and} \quad \frac{f\_1(a) + \tau f\_2(a)}{\sqrt{\tau^2 + 1}} = (g, \gamma\_\tau(\overline{\lambda}))\_\Delta.$$

Now denote the function on the left-hand side of (7.8.19) by $h = (A\_\tau - \lambda)^{-1}g$. Then $h$ also satisfies (7.8.24), and from (7.8.15) and Proposition 2.3.2 one obtains

$$\begin{aligned} \frac{\tau h\_1(a) - h\_2(a)}{\sqrt{\tau^2 + 1}} &= \Gamma\_0^\tau \{h, \lambda h + g\} = 0, \\\frac{h\_1(a) + \tau h\_2(a)}{\sqrt{\tau^2 + 1}} &= \Gamma\_1^\tau \{h, \lambda h + g\} = \gamma\_\tau(\overline{\lambda})^\* g = (g, \gamma\_\tau(\overline{\lambda}))\_\Delta. \end{aligned}$$

Thus, for a fixed $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the functions $h$ and $f$ satisfy the same initial value problem, and hence it follows that $h = f$. Therefore, $(A\_\tau - \lambda)^{-1}g$ is given by the right-hand side of (7.8.19). $\square$

Let $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ be the boundary triplet for $T\_{\max}$ in Theorem 7.8.2 and consider the self-adjoint extension $A\_\tau = \ker \Gamma\_0^\tau$; cf. (7.8.14). Assume that the Weyl function $M\_\tau$ corresponding to $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$ has the integral representation

$$M\_{\tau}(\lambda) = \alpha\_{\tau} + \beta\_{\tau}\lambda + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) d\sigma\_{\tau}(t), \tag{7.8.25}$$

with $\alpha\_\tau \in \mathbb{R}$, $\beta\_\tau \geq 0$, and $\sigma\_\tau$ a nondecreasing function with

$$\int\_{\mathbb{R}} \frac{1}{t^2 + 1} \, d\sigma\_{\tau}(t) < \infty.$$

Recall from Theorem 3.5.10 and Lemma A.2.6 that $\operatorname{mul} A\_\tau \ominus \operatorname{mul} T\_{\min}$ is nontrivial if and only if $\beta\_\tau > 0$.
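The structure of the integral representation (7.8.25) can be illustrated numerically. The following Python sketch (purely illustrative; the values of $\alpha\_\tau$, $\beta\_\tau$ and the three-point measure are hypothetical choices) evaluates a scalar Nevanlinna function of the form (7.8.25) for a discrete $\sigma\_\tau$ and checks two of its characteristic properties: it maps the upper half-plane into itself, and the linear coefficient is recovered from the behaviour at $i\infty$.

```python
import numpy as np

def nevanlinna(lam, alpha, beta, t, w):
    """M(lam) = alpha + beta*lam + sum_k w_k * (1/(t_k - lam) - t_k/(t_k^2 + 1)),
    i.e. (7.8.25) for the discrete measure sigma = sum_k w_k * delta_{t_k}."""
    lam = complex(lam)
    return alpha + beta * lam + np.sum(w * (1.0 / (t - lam) - t / (t**2 + 1.0)))

# hypothetical data: alpha_tau, beta_tau and a three-point measure sigma_tau
alpha, beta = 0.5, 2.0
t = np.array([-1.0, 0.0, 3.0])
w = np.array([0.2, 1.0, 0.7])

# M maps the upper half-plane into itself
for lam in (1j, -2.0 + 0.5j, 4.0 + 3.0j):
    assert nevanlinna(lam, alpha, beta, t, w).imag > 0

# the coefficient beta_tau is recovered from the behaviour at i*infinity
y = 1e8
assert abs(nevanlinna(1j * y, alpha, beta, t, w) / (1j * y) - beta) < 1e-6
```

Since the discrete measure here has finite total mass, the integrability condition on $\sigma\_\tau$ above is trivially satisfied.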

**Lemma 7.8.6.** Let $\tau \in \mathbb{R} \cup \{\infty\}$ and let $E\_\tau(\cdot)$ be the spectral measure of the self-adjoint relation $A\_\tau$. For $f \in L^2\_\Delta(\iota)$ with compact support define the Fourier transform $\widehat{f}$ by

$$
\widehat{f}(\mu) = \int\_a^b V\_2(s, \mu)^\* \Delta(s) f(s) \, ds, \quad \mu \in \mathbb{R},
$$

where $V\_2(\cdot, \mu)$ is the formal solution in (7.8.17). Let $\sigma\_\tau$ be the function in the integral representation (7.8.25) of the Weyl function $M\_\tau$. Then for every bounded open interval $\delta \subset \mathbb{R}$ whose endpoints are not eigenvalues of $A\_\tau$ one has

$$(E\_\tau(\delta)f, f)\_\Delta = \int\_\delta \widehat{f}(\mu) \, \overline{\widehat{f}(\mu)} \, d\sigma\_\tau(\mu). \tag{7.8.26}$$

Proof. Recall that $(A\_\tau - \lambda)^{-1}$ is given by (7.8.19), where the Green function $G\_\tau(t,s,\lambda)$ in (7.8.20) is given by (7.8.21) and (7.8.22). Assume that the function $f \in L^2\_\Delta(\iota)$ has compact support in $[a', b'] \subset \iota$. Then

$$\left(\left(A\_{\tau}-\lambda\right)^{-1}f,f\right)\_{\Delta} = \int\_{a}^{b} f(t)^{\*}\Delta(t)\left(\int\_{a}^{b} G\_{\tau}(t,s,\lambda)\Delta(s)f(s)\,ds\right)dt$$

for each $\lambda \in \mathbb{C} \setminus \mathbb{R}$, where, in fact, the integration takes place only on the square $[a', b'] \times [a', b']$.

Let $\delta \subset \mathbb{R}$ be a bounded interval such that the endpoints of $\delta$ are not eigenvalues of $A\_\tau$. Then the spectral projection of $A\_\tau$ corresponding to the interval $\delta$ is given by Stone's formula (1.5.7) (see also Example A.1.4):

$$\begin{split} (E(\delta)f,f)\_\Delta &= \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_\delta \big( \big[ (A\_\tau - (\mu + i\varepsilon))^{-1} - (A\_\tau - (\mu - i\varepsilon))^{-1} \big] f, f \big)\_\Delta \, d\mu \\ &= \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_\delta \int\_a^b \int\_a^b f(t)^\* \Delta(t) \big( G\_\tau(t,s,\mu+i\varepsilon) - G\_\tau(t,s,\mu-i\varepsilon) \big) \Delta(s) f(s) \, ds \, dt \, d\mu. \end{split}$$

Decompose the Green function in (7.8.20) as in (7.8.21) and (7.8.22). Since the function $\lambda \mapsto V(t,\lambda)$ is entire, one verifies in the same way as in the proof of Theorem 7.8.4 that

$$\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{\delta} \left( \int\_{a}^{b} \int\_{a}^{b} f(t)^{\*} \Delta(t) \big( G\_{\tau, \mathbf{e}}(t, s, \mu + i\varepsilon) - G\_{\tau, \mathbf{e}}(t, s, \mu - i\varepsilon) \big) \Delta(s) f(s) \, ds \, dt \right) d\mu = 0.$$

Therefore, it remains to consider the corresponding integral with $G\_{\tau,\mathbf{i}}$, which takes the form

$$\begin{split} &\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{\delta} \left( \int\_{a}^{b} \int\_{a}^{b} f(t)^{\*} \Delta(t) \big( V\_{2}(t, \mu + i\varepsilon) M\_{\tau}(\mu + i\varepsilon) V\_{2}(s, \mu - i\varepsilon)^{\*} - V\_{2}(t, \mu - i\varepsilon) M\_{\tau}(\mu - i\varepsilon) V\_{2}(s, \mu + i\varepsilon)^{\*} \big) \Delta(s) f(s) \, ds \, dt \right) d\mu \\ &= \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{\delta} \left( \int\_{a}^{b} \int\_{a}^{b} f(t)^{\*} \Delta(t) \big[ (g\_{t,s} M\_{\tau})(\mu + i\varepsilon) - (g\_{t,s} M\_{\tau})(\mu - i\varepsilon) \big] \Delta(s) f(s) \, ds \, dt \right) d\mu, \end{split} \tag{7.8.27}$$

where $g\_{t,s}$ stands for the $2 \times 2$ matrix function

$$g\_{t,s}(\eta) = V\_2(t,\eta) \, V\_2(s,\overline{\eta})^\*.$$

For $t, s \in [a', b']$ this function is entire in $\eta$. For $\varepsilon\_0 > 0$ and $A < B$ such that $\delta \subset (A, B)$ consider the rectangle $R = [A, B] \times [-i\varepsilon\_0, i\varepsilon\_0]$. Then the function $\{t, s, \eta\} \mapsto g\_{t,s}(\eta)$ is bounded on $[a', b'] \times [a', b'] \times R$, and since $\Delta f \in L^1(a', b')$, it follows that for each fixed $\varepsilon$ with $0 < \varepsilon \leq \varepsilon\_0$

$$\begin{split} &\frac{1}{2\pi i} \int\_{\delta} \left( \int\_{a}^{b} \int\_{a}^{b} f(t)^{\*} \Delta(t) \big[ (g\_{t,s} M\_{\tau})(\mu + i\varepsilon) - (g\_{t,s} M\_{\tau})(\mu - i\varepsilon) \big] \Delta(s) f(s) \, ds \, dt \right) d\mu \\ &= \frac{1}{2\pi i} \int\_{a}^{b} \int\_{a}^{b} f(t)^{\*} \Delta(t) \left( \int\_{\delta} \big[ (g\_{t,s} M\_{\tau})(\mu + i\varepsilon) - (g\_{t,s} M\_{\tau})(\mu - i\varepsilon) \big] \, d\mu \right) \Delta(s) f(s) \, ds \, dt. \end{split}$$

By the Stieltjes inversion formula in Lemma A.2.7 and Remark A.2.10, one sees

$$\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{\delta} \big[ (g\_{t,s}M\_{\tau})(\mu + i\varepsilon) - (g\_{t,s}M\_{\tau})(\mu - i\varepsilon) \big] \, d\mu = \int\_{\delta} g\_{t,s}(\mu) \, d\sigma\_{\tau}(\mu)$$

for all $t, s \in [a', b']$. To justify taking the limit $\varepsilon \downarrow 0$ inside the integral (7.8.27) one needs dominated convergence. For this purpose recall from Lemma A.2.7 and Remark A.2.10 that there exists $m \geq 0$ such that for $0 < \varepsilon \leq \varepsilon\_0$ one has

$$\begin{split} \left| \int\_{\delta} \big[ (g\_{t,s}M\_{\tau})(\mu + i\varepsilon) - (g\_{t,s}M\_{\tau})(\mu - i\varepsilon) \big] \, d\mu \right| \\ \leq m \sup \big\{ |g\_{t,s}(\eta)|, |g'\_{t,s}(\eta)| : t, s \in [a', b'], \, \eta \in R \big\}, \end{split} \tag{7.8.28}$$

where $R = [A, B] \times [-i\varepsilon\_0, i\varepsilon\_0]$. Since the functions

$$\{t, s, \eta\} \mapsto g\_{t,s}(\eta) \quad \text{and} \quad \{t, s, \eta\} \mapsto g'\_{t,s}(\eta)$$

are bounded on $[a', b'] \times [a', b'] \times R$, it follows that the integral in (7.8.28), regarded as a function of $\{t, s\}$ on $[a', b'] \times [a', b']$, is bounded by some constant for all $0 < \varepsilon \leq \varepsilon\_0$. Furthermore, $\Delta f \in L^1(a', b')$ implies that there is an integrable majorant for (7.8.27), and so dominated convergence and Fubini's theorem show that

$$\begin{aligned} (E(\delta)f,f)\_{\Delta} &= \int\_{a}^{b} \int\_{a}^{b} f(t)^{\*} \Delta(t) \left( \int\_{\delta} g\_{t,s}(\mu) \, d\sigma\_{\tau}(\mu) \right) \Delta(s) f(s) \, ds \, dt \\ &= \int\_{\delta} \left( \int\_{a}^{b} \big( V\_{2}(t,\mu)^{\*} \Delta(t) f(t) \big)^{\*} \, dt \right) \left( \int\_{a}^{b} V\_{2}(s,\mu)^{\*} \Delta(s) f(s) \, ds \right) d\sigma\_{\tau}(\mu) \end{aligned}$$

for every open interval $\delta$ whose endpoints are not eigenvalues of $A\_\tau$. This gives the formula in (7.8.26). $\square$
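The Stieltjes inversion step used in the proof (Lemma A.2.7) admits a simple numerical illustration. In the sketch below (illustrative only; the point masses and weights are hypothetical choices) $M(\lambda) = \sum\_k w\_k/(t\_k - \lambda)$ is the Nevanlinna function of a discrete measure, and the boundary integral $\frac{1}{2\pi i}\int\_\delta [M(\mu+i\varepsilon) - M(\mu-i\varepsilon)]\, d\mu$ recovers the mass of $\sigma$ inside $\delta$ when the endpoints of $\delta$ carry no mass, which is the discrete analogue of (7.8.26).

```python
import numpy as np

t = np.array([-1.0, 0.5, 2.0])    # hypothetical point masses of sigma
w = np.array([0.3, 1.1, 0.6])     # their weights

def stieltjes_mass(a, b, eps, n=200000):
    """(1/(2*pi*i)) * integral over (a,b) of M(mu+i*eps) - M(mu-i*eps) d(mu),
    approximated by a midpoint rule, where M(lam) = sum_k w_k/(t_k - lam)."""
    mu = np.linspace(a, b, n, endpoint=False) + (b - a) / (2.0 * n)
    diff = np.sum(w[:, None] / (t[:, None] - (mu + 1j * eps))
                  - w[:, None] / (t[:, None] - (mu - 1j * eps)), axis=0)
    return float(np.real(np.sum(diff) * (b - a) / n / (2j * np.pi)))

# delta = (0, 3) contains the masses at 0.5 and 2.0; total weight 1.7
assert abs(stieltjes_mass(0.0, 3.0, eps=1e-4) - 1.7) < 1e-2
```

As $\varepsilon \downarrow 0$ the Poisson kernels concentrate on the mass points, so only the masses inside $\delta$ contribute in the limit.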

The next theorem is a consequence of Lemma 7.8.6 and Theorem B.2.3.

**Theorem 7.8.7.** Let $\tau \in \mathbb{R} \cup \{\infty\}$, let $V\_2(\cdot, \mu)$ be the formal solution in (7.8.17), and let $\sigma\_\tau$ be the function in the integral representation of the Weyl function $M\_\tau$. Then the Fourier transform

$$f \mapsto \widehat{f}, \qquad \widehat{f}(\mu) = \int\_a^b V\_2(s, \mu)^\* \Delta(s) f(s) \, ds, \quad \mu \in \mathbb{R},$$

extends by continuity from compactly supported functions $f \in L^2\_\Delta(\iota)$ to a surjective partial isometry $\mathcal{F}$ from $L^2\_\Delta(\iota)$ to $L^2\_{d\sigma\_\tau}(\mathbb{R})$ with $\ker \mathcal{F} = \operatorname{mul} A\_\tau$. The restriction $\mathcal{F}\_{\mathrm{op}} : L^2\_\Delta(\iota) \ominus \operatorname{mul} A\_\tau \to L^2\_{d\sigma\_\tau}(\mathbb{R})$ is a unitary mapping such that the self-adjoint operator $(A\_\tau)\_{\mathrm{op}}$ in $L^2\_\Delta(\iota) \ominus \operatorname{mul} A\_\tau$ is unitarily equivalent to multiplication by the independent variable in $L^2\_{d\sigma\_\tau}(\mathbb{R})$.

Proof. It follows from Lemma 7.8.6 that condition (B.2.2) is satisfied. Furthermore, for every $\mu \in \mathbb{R}$ there exists a compactly supported function $f \in L^2\_\Delta(\iota)$ such that

$$\widehat{f}(\mu) = \int\_a^b V\_2(s,\mu)^\* \Delta(s) f(s) \, ds \neq 0.$$

To see this, assume that for some $\mu \in \mathbb{R}$

$$\int\_a^b V\_2(s,\mu)^\* \Delta(s) f(s) \, ds = 0$$

for all compactly supported $f \in L^2\_\Delta(\iota)$. This implies that $V\_2(s,\mu)^\*\Delta(s) = 0$ for almost every $s \in (a,b)$. By definiteness one has $V\_2(s,\mu) = 0$ for almost every $s \in (a,b)$, which is a contradiction. Therefore, condition (B.2.9) is satisfied and the result follows from Theorem B.2.3. $\square$
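A finite-dimensional analogue may help to visualize Theorem 7.8.7: for a real symmetric matrix the "Fourier transform" is the expansion in an orthonormal eigenbasis, a unitary map that carries the matrix into multiplication by its eigenvalues (in this toy setting there is no multivalued part, so the partial isometry is actually unitary). The following sketch, with randomly generated data, checks this.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2.0               # a self-adjoint "operator" on R^5

mu, V = np.linalg.eigh(A)         # eigenvalues mu and orthonormal eigenvectors
F = V.T                           # "Fourier transform": f -> eigenbasis coefficients

f = rng.standard_normal(5)
assert np.allclose(F @ F.T, np.eye(5))         # F is unitary (here: orthogonal)
assert np.allclose(F @ (A @ f), mu * (F @ f))  # F carries A into multiplication by mu
```

Here the counting measure on the eigenvalues plays the role of $d\sigma\_\tau$.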

In the next lemma the Fourier transform $\mathcal{F}\gamma\_\tau$ of the $\gamma$-field in (7.8.16) corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$ is computed; this allows one to identify the model in Theorem 7.8.7 with the model for scalar Nevanlinna functions in Section 4.3.

**Lemma 7.8.8.** Let $\tau \in \mathbb{R} \cup \{\infty\}$ and let $\gamma\_\tau$ be the $\gamma$-field in (7.8.16). Then for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$ one has, almost everywhere in the sense of $d\sigma\_\tau$,

$$\left[\mathcal{F}\gamma\_{\tau}(\lambda)\right](\mu) = \frac{1}{\mu - \lambda}, \quad \mu \in \mathbb{R},$$

where $\mathcal{F}$ is the Fourier transform from $L^2\_\Delta(\iota)$ onto $L^2\_{d\sigma\_\tau}(\mathbb{R})$ in Theorem 7.8.7.

Proof. Recall first that for $g \in L^2\_\Delta(\iota)$ and $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the function $(A\_\tau - \lambda)^{-1}g$ is given by the identity (7.8.19), which holds for all $t \in \iota$. In fact, the Green function $G\_\tau$ in (7.8.19) is a $2 \times 2$ matrix function and the following notation will be useful:

$$G\_{\tau}(t,s,\lambda) = \begin{pmatrix} G\_{\tau,1}(t,s,\lambda) \\ G\_{\tau,2}(t,s,\lambda) \end{pmatrix},$$

where each of these components is a 1×2 matrix function. The fundamental matrix V (·, λ) in (7.8.17) is written in the form

$$
\begin{pmatrix} V\_1(\cdot,\lambda) & V\_2(\cdot,\lambda) \end{pmatrix} = \begin{pmatrix} V\_{11}(\cdot,\lambda) & V\_{12}(\cdot,\lambda) \\ V\_{21}(\cdot,\lambda) & V\_{22}(\cdot,\lambda) \end{pmatrix}.
$$

Now observe that

$$G\_{\tau,1}(t,s,\lambda) = \begin{cases} \left(V\_{11}(t,\lambda) + M\_{\tau}(\lambda)V\_{12}(t,\lambda)\right)V\_{2}(s,\overline{\lambda})^{\*}, & s < t, \\ V\_{12}(t,\lambda)\gamma\_{\tau}(s,\overline{\lambda})^{\*}, & s > t, \end{cases} \tag{7.8.29}$$

and

$$G\_{\tau,2}(t,s,\lambda) = \begin{cases} \left(V\_{21}(t,\lambda) + M\_{\tau}(\lambda)V\_{22}(t,\lambda)\right)V\_{2}(s,\overline{\lambda})^{\*}, & s < t, \\ V\_{22}(t,\lambda)\gamma\_{\tau}(s,\overline{\lambda})^{\*}, & s > t, \end{cases} \tag{7.8.30}$$

which follows easily from (7.8.21) and (7.8.22); cf. (7.8.23). Note also that

$$V\_{ij}(\cdot,\overline{\lambda}) = \overline{V\_{ij}(\cdot,\lambda)}, \qquad i,j = 1,2,$$

since the system is real (see Lemma 7.2.8), and that (7.2.10) implies the useful identity

$$V\_{11}(t,\lambda)V\_{22}(t,\lambda) - V\_{21}(t,\lambda)V\_{12}(t,\lambda) = 1, \quad t \in \iota. \tag{7.8.31}$$

For $\lambda \in \mathbb{C} \setminus \mathbb{R}$ one has

$$((A\_\tau - \lambda)^{-1}g)(t) = \begin{pmatrix} \int\_a^b G\_{\tau,1}(t, s, \lambda) \Delta(s) g(s) \, ds\\ \int\_a^b G\_{\tau,2}(t, s, \lambda) \Delta(s) g(s) \, ds \end{pmatrix} = \begin{pmatrix} (g, G\_{\tau,1}(t, \cdot, \lambda)^\*)\_\Delta\\ (g, G\_{\tau,2}(t, \cdot, \lambda)^\*)\_\Delta \end{pmatrix}.$$

Since the Fourier transform $\mathcal{F} : L^2\_\Delta(\iota) \to L^2\_{d\sigma\_\tau}(\mathbb{R})$ in Theorem 7.8.7 is a partial isometry with $\ker \mathcal{F} = \operatorname{mul} A\_\tau$, it follows from (1.1.10) that

$$\begin{aligned} \left( (A\_{\tau} - \lambda)^{-1} g \right)(t) &= \begin{pmatrix} \left( \mathcal{F}g, \mathcal{F}G\_{\tau,1}(t, \cdot, \lambda)^{\*} \right)\_{L^{2}\_{d\sigma\_{\tau}}(\mathbb{R})} \\ \left( \mathcal{F}g, \mathcal{F}G\_{\tau,2}(t, \cdot, \lambda)^{\*} \right)\_{L^{2}\_{d\sigma\_{\tau}}(\mathbb{R})} \end{pmatrix} \\ &= \begin{pmatrix} \int\_{\mathbb{R}} \mathcal{F}g(\mu) \overline{\left[ \mathcal{F}G\_{\tau,1}(t, \cdot, \lambda)^{\*} \right](\mu)} \, d\sigma\_{\tau}(\mu) \\ \int\_{\mathbb{R}} \mathcal{F}g(\mu) \overline{\left[ \mathcal{F}G\_{\tau,2}(t, \cdot, \lambda)^{\*} \right](\mu)} \, d\sigma\_{\tau}(\mu) \end{pmatrix} \end{aligned} \tag{7.8.32}$$

is valid for all $g \in (\operatorname{mul} A\_\tau)^\perp$. It is clear that (7.8.32) is also true for all $g \in \operatorname{mul} A\_\tau$, since in this case $(A\_\tau - \lambda)^{-1}g = 0$ and $\mathcal{F}g = 0$. Therefore, (7.8.32) is valid for all $g \in L^2\_\Delta(\iota)$. Moreover, if $g \in L^2\_\Delta(\iota)$, then for almost all $t \in \iota$

$$\begin{split} \left( (A\_{\tau} - \lambda)^{-1} g \right)(t) &= \int\_{\mathbb{R}} \frac{V\_{2}(t, \mu)}{\mu - \lambda} \, \mathcal{F} g(\mu) \, d\sigma\_{\tau}(\mu) \\ &= \begin{pmatrix} \int\_{\mathbb{R}} \frac{V\_{12}(t, \mu)}{\mu - \lambda} \, \mathcal{F} g(\mu) \, d\sigma\_{\tau}(\mu) \\ \int\_{\mathbb{R}} \frac{V\_{22}(t, \mu)}{\mu - \lambda} \, \mathcal{F} g(\mu) \, d\sigma\_{\tau}(\mu) \end{pmatrix}; \end{split} \tag{7.8.33}$$

see (B.2.5). Furthermore, if $\mathcal{F}g$ has compact support, then the right-hand side of (7.8.33) is absolutely continuous, and hence in this case the equality holds for all $t \in \iota$. Therefore, when $\mathcal{F}g$ has compact support, the right-hand sides of (7.8.32) and (7.8.33) are equal for all $t \in \iota$, and hence one has

$$\frac{V\_{12}(t,\mu)}{\mu-\lambda} = \overline{\left[\mathcal{F}G\_{\tau,1}(t,\cdot,\lambda)^{\*}\right](\mu)}\quad \text{and}\quad \frac{V\_{22}(t,\mu)}{\mu-\lambda} = \overline{\left[\mathcal{F}G\_{\tau,2}(t,\cdot,\lambda)^{\*}\right](\mu)}$$

for all $t \in \iota$. These identities hold for all $\mu \in \mathbb{R} \setminus \Omega(t)$, where the set $\Omega(t) \subset \mathbb{R}$ has $d\sigma\_\tau$-measure $0$. Hence, by replacing $\lambda$ by $\overline{\lambda}$ and taking conjugates, one has for all $t \in \iota$ and all $\mu \in \mathbb{R} \setminus \Omega(t)$

$$\frac{V\_{12}(t,\mu)}{\mu-\lambda} = \left[\mathcal{F}G\_{\tau,1}(t,\cdot,\overline{\lambda})^\*\right](\mu) \quad \text{and} \quad \frac{V\_{22}(t,\mu)}{\mu-\lambda} = \left[\mathcal{F}G\_{\tau,2}(t,\cdot,\overline{\lambda})^\*\right](\mu). \tag{7.8.34}$$

By means of (7.8.29), (7.8.30), (7.8.31), and (7.8.34) it is straightforward to verify that for all $t \in \iota$ and all $\mu \in \mathbb{R} \setminus \Omega(t)$

$$\begin{split} \frac{1}{\mu-\lambda} \Big( V\_{11}(t,\lambda)V\_{22}(t,\mu) - V\_{21}(t,\lambda)V\_{12}(t,\mu) \Big) \\ = & V\_{11}(t,\lambda) \Big[ \mathcal{F}G\_{\tau,2}(t,\cdot,\overline{\lambda})^\* \Big] (\mu) - V\_{21}(t,\lambda) \Big[ \mathcal{F}G\_{\tau,1}(t,\cdot,\overline{\lambda})^\* \Big] (\mu) \\ = & \mathcal{F} \Big[ V\_{11}(t,\lambda) \, G\_{\tau,2}(t,\cdot,\overline{\lambda})^\* - V\_{21}(t,\lambda) \, G\_{\tau,1}(t,\cdot,\overline{\lambda})^\* \Big] (\mu) \\ = & \mathcal{F} \Big[ W(t,\cdot,\lambda) \Big] (\mu), \end{split} \tag{7.8.35}$$

where the 2 × 1 matrix function W(·, ·, λ) is given by

$$W(t,s,\lambda) = \begin{cases} M\_\tau(\lambda)V\_2(s,\lambda), & s < t, \\ \gamma\_\tau(s,\lambda), & s > t. \end{cases} \tag{7.8.36}$$

The above identity and a limit process will give the desired result. In fact, first observe that according to the definition of W(·, ·, λ) in (7.8.36) one has

$$\|\gamma\_\tau(\cdot,\lambda) - W(t,\cdot,\lambda)\|\_{\Delta}^2 = \int\_a^t |\Delta(s)^{\frac{1}{2}} V\_1(s,\lambda)|^2 \, ds \to 0 \quad \text{as} \quad t \to a,$$

and hence the continuity of $\mathcal{F} : L^2\_\Delta(\iota) \to L^2\_{d\sigma\_\tau}(\mathbb{R})$ implies that

$$\left\| \left[ \mathcal{F} \gamma\_{\tau}(\cdot, \lambda) \right] - \left[ \mathcal{F} W(t, \cdot, \lambda) \right] \right\|\_{L^{2}\_{d\sigma\_{\tau}}(\mathbb{R})} \to 0 \quad \text{as} \quad t \to a.$$

Now approximate $a$ by a sequence $t\_n \in \iota$ with $t\_n \to a$. Then there exists a subsequence, again denoted by $t\_n$, such that pointwise

$$\left[\mathcal{F}\gamma\_{\tau}(\cdot,\lambda)\right](\mu) = \lim\_{n\to\infty} \left[\mathcal{F}W(t\_n,\cdot,\lambda)\right](\mu), \quad \mu \in \mathbb{R} \backslash \Omega,$$

where $\Omega$ is a set of measure $0$ in the sense of $d\sigma\_\tau$. Observe that (7.8.35) gives

$$\left[\mathcal{F}W(t\_n,\cdot,\lambda)\right](\mu) = \frac{1}{\mu-\lambda} \left(V\_{11}(t\_n,\lambda)V\_{22}(t\_n,\mu) - V\_{21}(t\_n,\lambda)V\_{12}(t\_n,\mu)\right)$$

for all $\mu \in \mathbb{R} \setminus \Omega(t\_n)$. Taking the limit on the right-hand side as $n \to \infty$ gives

$$\frac{1}{\mu-\lambda} \left( V\_{11}(a,\lambda)V\_{22}(a,\mu) - V\_{21}(a,\lambda)V\_{12}(a,\mu) \right) = \frac{1}{\mu-\lambda},$$

which follows from the form of the fundamental matrix $(V\_1(\cdot,\lambda)\;V\_2(\cdot,\lambda))$ in (7.8.17). Hence,

$$\left[\mathcal{F}\gamma\_{\tau}(\cdot,\lambda)\right](\mu) = \frac{1}{\mu-\lambda}, \quad \mu \in \mathbb{R} \setminus \left(\Omega \cup \bigcup\_{n=1}^{\infty} \Omega(t\_n)\right),$$

which completes the proof. $\square$

Lemma 7.8.8 will now be used to identify the model in Theorem 7.8.7 with the model for scalar Nevanlinna functions discussed in Section 4.3. Without loss of generality it is assumed that $T\_{\min}$ is simple; cf. Section 3.4. The Weyl function $M\_\tau$ of the boundary triplet $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$ for $T\_{\max}$ has the integral representation (7.8.25). If $\beta\_\tau = 0$, then the discussion in Chapter 6 following Lemma 6.4.8 applies in this case as well. Hence, assume $\beta\_\tau > 0$ in (7.8.25). Then by Theorem 4.3.4 there is a closed simple symmetric operator $S$ in $L^2\_{d\sigma\_\tau}(\mathbb{R}) \oplus \mathbb{C}$ such that the Nevanlinna function $M\_\tau$ in (7.8.25) is the Weyl function corresponding to the boundary triplet $\{\mathbb{C}, \Gamma'\_0, \Gamma'\_1\}$ for $S^\*$ in Theorem 4.3.4. The $\gamma$-field corresponding to $\{\mathbb{C}, \Gamma'\_0, \Gamma'\_1\}$ is denoted by $\gamma'$ and is given by (4.3.16). Furthermore, the restriction $A'\_0$ corresponding to the boundary mapping $\Gamma'\_0$ is a self-adjoint relation in $L^2\_{d\sigma\_\tau}(\mathbb{R}) \oplus \mathbb{C}$ whose operator part $(A'\_0)\_{\mathrm{op}}$ is the maximal multiplication operator by the independent variable in $L^2\_{d\sigma\_\tau}(\mathbb{R})$. By comparing with (4.3.16) one sees that, according to Lemma 7.8.8, the unitary Fourier transform $\mathcal{F}\_{\mathrm{op}}$ from $L^2\_\Delta(\iota) \ominus \operatorname{mul} A\_\tau$ onto $L^2\_{d\sigma\_\tau}(\mathbb{R})$ satisfies

$$
\mathcal{F}\_{\text{op}} \, P \gamma\_\tau (\lambda) = P' \gamma'(\lambda),
$$

where $P$ and $P'$ stand for the orthogonal projections from $L^2\_\Delta(\iota)$ onto $(\operatorname{mul} A\_\tau)^\perp$ and from $L^2\_{d\sigma\_\tau}(\mathbb{R}) \oplus \mathbb{C}$ onto $L^2\_{d\sigma\_\tau}(\mathbb{R}) = (\operatorname{mul} A'\_0)^\perp$, respectively. Recall that

$$(I - P)\gamma\_\tau(\lambda) \quad \text{and} \quad (I - P')\gamma'(\lambda)$$

are independent of $\lambda$ and belong to $\operatorname{mul} A\_\tau$ and $\operatorname{mul} A'\_0$, respectively; cf. Corollary 2.5.16. Hence, the mapping $\mathcal{F}\_{\mathrm{m}}$ from $\operatorname{mul} A\_\tau$ to $\operatorname{mul} A'\_0$ defined by

$$\mathcal{F}\_{\mathrm{m}}(I - P)\gamma\_{\tau}(\lambda) = \beta\_\tau^{\frac{1}{2}} = (I - P')\gamma'(\lambda)$$

is a one-to-one correspondence. In fact, $\mathcal{F}\_{\mathrm{m}}$ is an isometry due to Proposition 3.5.7. Define the mapping $U$ from the space $L^2\_\Delta(\iota)$ to the model space $L^2\_{d\sigma\_\tau}(\mathbb{R}) \oplus \mathbb{C}$ by

$$U = \begin{pmatrix} \mathcal{F}\_{\text{op}} & 0\\ 0 & \mathcal{F}\_{\text{m}} \end{pmatrix} : \begin{pmatrix} (\text{mul}\,A\_{\tau})^{\perp} \\ \text{mul}\,A\_{\tau} \end{pmatrix} \to \begin{pmatrix} (\text{mul}\,A\_{0}')^{\perp} \\ \text{mul}\,A\_{0}' \end{pmatrix}.$$

Then it is clear that U is unitary and that

$$U\gamma\_\tau(\lambda) = \gamma'(\lambda).$$

Hence, by Theorem 4.2.6, the boundary triplet $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$ for $T\_{\max}$ and the boundary triplet $\{\mathbb{C}, \Gamma'\_0, \Gamma'\_1\}$ for $S^\*$ are unitarily equivalent under the mapping $U$; in particular, one has

$$(A'\_0)\_{\mathrm{op}} = \mathcal{F}\_{\mathrm{op}}(A\_\tau)\_{\mathrm{op}} \mathcal{F}\_{\mathrm{op}}^{-1} \quad \text{and} \quad A'\_0 = U A\_\tau U^{-1}.$$

At the end of this section the case where the endpoint $a$ is in the limit-circle case and the endpoint $b$ is in the limit-point case is briefly discussed. In a similar way as at the end of Section 7.7 one makes use of the transformation in Lemma 7.2.5. The next proposition is the counterpart of Theorem 7.8.2; it is proved in the same way.

**Proposition 7.8.9.** Assume that $a$ is in the limit-circle case and that $b$ is in the limit-point case. Let $\lambda\_0 \in \mathbb{R}$, let $U(\cdot, \lambda\_0)$ be a solution matrix as in Lemma 7.2.5, and consider the limit

$$\widetilde{f}(a) = \lim\_{t \to a} U(t, \lambda\_0)^{-1} f(t)$$

for $\{f,g\} \in T\_{\max}$; cf. Corollary 7.4.8. Then $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$, where

$$
\Gamma\_0\{f,g\} = \widetilde{f}\_1(a) \quad \text{and} \quad \Gamma\_1\{f,g\} = \widetilde{f}\_2(a), \quad \{f,g\} \in T\_{\text{max}}\,,
$$

is a boundary triplet for $(T\_{\min})^\* = T\_{\max}$. Let $Y(\cdot, \lambda)$ be a fundamental matrix fixed in such a way that $\widetilde{Y}(\cdot, \lambda) = U(\cdot, \lambda\_0)^{-1}Y(\cdot, \lambda)$ satisfies $\widetilde{Y}(a, \lambda) = I$. Then for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the $\gamma$-field $\gamma$ and Weyl function $M$ corresponding to $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ are given by

$$
\gamma(\lambda) = Y\_1(\cdot, \lambda) + M(\lambda)Y\_2(\cdot, \lambda) \quad \text{and} \quad M(\lambda) = \frac{\widetilde{\chi}\_2(a, \lambda)}{\widetilde{\chi}\_1(a, \lambda)},
$$

where $\widetilde{\chi}(\cdot, \lambda) = U(\cdot, \lambda\_0)^{-1}\chi(\cdot, \lambda)$ and $\chi(\cdot, \lambda)$ is a nontrivial element in $\mathfrak{N}\_\lambda(T\_{\max})$.

## **7.9 Weyl functions and subordinate solutions**

Consider the real definite canonical system (7.2.3) on the interval $\iota = (a,b)$ and assume that the endpoint $a$ is regular and that the endpoint $b$ is in the limit-point case. Let $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ be the boundary triplet for $T\_{\max}$ in Theorem 7.7.2 with $\gamma$-field $\gamma$ and Weyl function $M$. The spectrum of the self-adjoint extension

$$A\_0 = \ker \Gamma\_0 = \left\{ \{f, g\} \in T\_{\text{max}} \; : \; f\_1(a) = 0 \right\}$$

will be studied by means of subordinate solutions of the equation $Jy' - Hy = \lambda \Delta y$. The discussion in this section parallels that in Section 6.7.

It is useful to take into account all self-adjoint extensions of $T\_{\min}$. As in Section 7.8, these are in one-to-one correspondence with the numbers $\tau \in \mathbb{R} \cup \{\infty\}$ as restrictions of $T\_{\max}$ via

$$A\_{\tau} = \left\{ \{f, g\} \in T\_{\text{max}} \; : \; \Gamma\_1 \{f, g\} = \tau \Gamma\_0 \{f, g\} \right\},\tag{7.9.1}$$

with the understanding that $A\_0$ corresponds to $\tau = \infty$. As before, let the boundary triplet $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$ be defined by the transformation (7.8.15), with the Weyl function and $\gamma$-field given by

$$M\_{\tau}(\lambda) = \frac{1 + \tau M(\lambda)}{\tau - M(\lambda)} \quad \text{and} \quad \gamma\_{\tau}(\lambda) = \frac{\sqrt{\tau^2 + 1}}{\tau - M(\lambda)} \gamma(\lambda), \quad \lambda \in \mathbb{C} \setminus \mathbb{R}. \tag{7.9.2}$$

Recall that the Weyl function and the γ-field are connected via

$$\frac{M\_{\tau}(\lambda) - M\_{\tau}(\mu)^{\*}}{\lambda - \overline{\mu}} = \gamma\_{\tau}(\mu)^{\*}\gamma\_{\tau}(\lambda), \quad \lambda, \mu \in \mathbb{C} \backslash \mathbb{R}. \tag{7.9.3}$$
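The transformation formulas in (7.9.2) and the identity (7.9.3) can be checked numerically in a simple scalar model. In the sketch below (illustrative only; the discrete measure and the value of $\tau$ are arbitrary choices) $M$ is the Nevanlinna function of a discrete measure, $\gamma(\lambda)$ the corresponding vector $(\sqrt{w\_k}/(t\_k - \lambda))\_k$ in the model space, and the transformed pair $(M\_\tau, \gamma\_\tau)$ is verified to satisfy (7.9.3).

```python
import numpy as np

t = np.array([-2.0, 0.0, 1.5])    # hypothetical discrete measure
w = np.array([0.4, 1.0, 0.8])

def M(lam):                        # scalar Nevanlinna function
    return np.sum(w / (t - lam))

def gamma(lam):                    # gamma-field in the model space
    return np.sqrt(w) / (t - lam)

tau = 0.7

def M_tau(lam):                    # the transform in (7.9.2)
    return (1.0 + tau * M(lam)) / (tau - M(lam))

def gamma_tau(lam):
    return np.sqrt(tau**2 + 1.0) / (tau - M(lam)) * gamma(lam)

lam, mu = 1.0 + 2.0j, -0.5 + 1.0j
lhs = (M_tau(lam) - np.conj(M_tau(mu))) / (lam - np.conj(mu))
rhs = np.vdot(gamma_tau(mu), gamma_tau(lam))   # gamma_tau(mu)^* gamma_tau(lam)
assert abs(lhs - rhs) < 1e-12
```

The check works because the Möbius transform in (7.9.2) multiplies the Nevanlinna kernel of $M$ by $(\tau^2+1)/\big((\tau - M(\lambda))(\tau - \overline{M(\mu)})\big)$, exactly the factor absorbed into $\gamma\_\tau$.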

The transformation (7.8.15) also induces a transformation of the fundamental matrix Y (·, λ) with Y (a, λ) = I as in (7.8.17):

$$
\begin{pmatrix} V\_1(\cdot,\lambda) & V\_2(\cdot,\lambda) \end{pmatrix} = \begin{pmatrix} Y\_1(\cdot,\lambda) & Y\_2(\cdot,\lambda) \end{pmatrix} \frac{1}{\sqrt{\tau^2+1}} \begin{pmatrix} \tau & 1 \\ -1 & \tau \end{pmatrix} . \tag{7.9.4}$$

Recall that

$$\gamma\_{\tau}(\lambda) = V\_1(\cdot, \lambda) + M\_{\tau}(\lambda)V\_2(\cdot, \lambda)$$

is square-integrable with respect to Δ, while the second column of V (·, λ) satisfies the (formal) boundary condition which determines A<sup>τ</sup> ; cf. (7.8.18).

In the next estimates it is more convenient to work in the semi-Hilbert space $\mathcal{L}^2\_\Delta(\iota)$ rather than in the Hilbert space $L^2\_\Delta(\iota)$. In fact, for each $x > a$ the notation $\mathcal{L}^2\_\Delta(a,x)$ stands for the semi-Hilbert space with the semi-inner product

$$(f,g)\_x = \int\_a^x g(t)^\* \Delta(t) f(t) \, dt, \quad f, g \in \mathcal{L}^2\_\Delta(a,x),$$

and the seminorm corresponding to $(\cdot,\cdot)\_x$ will be denoted by $\|\cdot\|\_x$; here the index $\Delta$ is omitted. Hence, for fixed $f,g \in \mathcal{L}^2\_\Delta(a,c)$, $a < c < b$, the function $x \mapsto (f,g)\_x$ is absolutely continuous and

$$\frac{d}{dx}(f,g)\_x = g(x)^\* \Delta(x) f(x) \tag{7.9.5}$$

holds almost everywhere on (a, c).

**Definition 7.9.1.** Let $\lambda \in \mathbb{C}$. Then a solution $h(\cdot, \lambda)$ of $Jh' - Hh = \lambda \Delta h$ is said to be subordinate at $b$ if

$$\lim\_{x \to b} \frac{\|h(\cdot, \lambda)\|\_x}{\|k(\cdot, \lambda)\|\_x} = 0$$

for every nontrivial solution $k(\cdot, \lambda)$ of $Jk' - Hk = \lambda \Delta k$ which is not a scalar multiple of $h(\cdot, \lambda)$.
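As a simple illustration of Definition 7.9.1 (with hypothetical scalar "solutions" rather than an actual canonical system): on $(0,\infty)$ the functions $e^{-t}$ and $e^{t}$ behave like a subordinate and a non-subordinate solution, since the quotient of their seminorms over $(0,x)$ tends to $0$ as $x \to \infty$. The sketch approximates the seminorms by Riemann sums.

```python
import numpy as np

def seminorm(f, x, n=100001):
    """Riemann-sum approximation of the L^2 seminorm of f over (0, x)."""
    s = np.linspace(0.0, x, n)
    return np.sqrt(np.sum(f(s)**2) * (x / (n - 1)))

h = lambda s: np.exp(-s)   # candidate subordinate solution at b = infinity
k = lambda s: np.exp(s)    # a solution that is not a scalar multiple of h

ratios = [seminorm(h, x) / seminorm(k, x) for x in (5.0, 10.0, 15.0)]
assert ratios[0] > ratios[1] > ratios[2]   # the quotient decreases ...
assert ratios[2] < 1e-5                    # ... towards 0 as x -> b
```

Note that here $h$ is square-integrable near $b$ while $k$ is not, matching the observation below that an $L^2$ solution is automatically subordinate in the limit-point case.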

The spectrum of the self-adjoint extension $A\_0$ will be studied in terms of solutions of the canonical system $Jy' - Hy = \xi \Delta y$, $\xi \in \mathbb{R}$, which do not necessarily belong to $L^2\_\Delta(a,b)$. Observe that if a solution $h(\cdot, \xi)$ of $Jy' - Hy = \xi \Delta y$ belongs to $L^2\_\Delta(a,b)$, then it is subordinate at $b$, since $b$ is in the limit-point case, and hence any other nontrivial solution which is not a scalar multiple of it does not belong to $L^2\_\Delta(a,b)$.

By means of the fundamental system $(V\_1(\cdot,\lambda)\;V\_2(\cdot,\lambda))$ in (7.8.17) define for any $\lambda \in \mathbb{C}$ and $h \in \mathcal{L}^2\_\Delta(a,x)$, $a < x < b$,

$$\begin{aligned} (\mathcal{H}(\lambda)h)(t) &= V\_1(t,\lambda) \int\_a^t V\_2(s,\overline{\lambda})^\* \Delta(s) h(s) \, ds \\ &- V\_2(t,\lambda) \int\_a^t V\_1(s,\overline{\lambda})^\* \Delta(s) h(s) \, ds, \quad t \in (a,x). \end{aligned}$$

Thus, $\mathcal{H}(\lambda)$ is a well-defined integral operator and it is clear that the function $\mathcal{H}(\lambda)h$ is absolutely continuous. Using the identity (7.2.10) for $V(\cdot,\lambda)$ (which holds because $V(a,\lambda)^\*JV(a,\lambda) = J$) one sees in the same way as in (7.2.14)–(7.2.16) that $f = \mathcal{H}(\lambda)h$ satisfies

$$Jf' - Hf = \lambda \Delta f + \Delta h, \quad f(a) = 0.$$

In particular, $\mathcal{H}(\lambda)$ maps $\mathcal{L}^2\_\Delta(a,x)$ into itself. It follows directly that for $\lambda, \mu \in \mathbb{C}$ one has

$$V\_i(\cdot,\lambda) - V\_i(\cdot,\mu) = (\lambda - \mu)\mathcal{H}(\lambda)V\_i(\cdot,\mu), \quad i = 1,2,\tag{7.9.6}$$

since the functions on the left-hand side and the right-hand side both satisfy the same equation $Jy' - Hy = \lambda \Delta y + (\lambda - \mu)\Delta V\_i(\cdot,\mu)$ and both functions vanish at the endpoint $a$.

**Lemma 7.9.2.** Let $a < x < b$ and let $h \in \mathcal{L}^2\_\Delta(a,x)$. Then the operator $\mathcal{H}(\lambda)$ satisfies

$$\left\|\mathcal{H}(\lambda)h\right\|\_{x}^{2} \le 2\left\|V\_{1}(\cdot,\lambda)\right\|\_{x}^{2}\left\|V\_{2}(\cdot,\lambda)\right\|\_{x}^{2}\left\|h\right\|\_{x}^{2}.$$

Proof. The definition of $\mathcal{H}(\lambda)$ may be written as

$$(\mathcal{H}(\lambda)h)(t) = V\_1(t,\lambda)g\_2(t,\lambda) - V\_2(t,\lambda)g\_1(t,\lambda),\tag{7.9.7}$$

with the functions $g\_i(\cdot, \lambda)$, $i = 1,2$, defined by

$$g\_i(t, \lambda) = \int\_a^t V\_i(s, \overline{\lambda})^\* \Delta(s) h(s) \, ds.$$

The Cauchy–Schwarz inequality and Corollary 7.2.9 show that

$$|g\_i(t,\lambda)|^2 \le \|V\_i(\cdot,\overline{\lambda})\|\_t^2 \|h\|\_t^2 = \|V\_i(\cdot,\lambda)\|\_t^2 \|h\|\_t^2, \quad i = 1,2. \tag{7.9.8}$$

Multiplying $(\mathcal{H}(\lambda)h)(t)$ in (7.9.7) on the left by the matrix $\Delta(t)^{\frac{1}{2}}$, using the inequality $|a+b|^2 \le 2(|a|^2 + |b|^2)$, and using (7.9.8), one obtains

$$\begin{aligned} &|\Delta(t)^{\frac{1}{2}}(\mathcal{H}(\lambda)h)(t)|^{2} \\ &\leq 2\Big(|\Delta(t)^{\frac{1}{2}}V\_{1}(t,\lambda)|^{2}|g\_{2}(t,\lambda)|^{2}+|\Delta(t)^{\frac{1}{2}}V\_{2}(t,\lambda)|^{2}|g\_{1}(t,\lambda)|^{2}\Big) \\ &\leq 2\Big(|\Delta(t)^{\frac{1}{2}}V\_{1}(t,\lambda)|^{2}\|V\_{2}(\cdot,\lambda)\|\_{t}^{2}\|h\|\_{t}^{2}+|\Delta(t)^{\frac{1}{2}}V\_{2}(t,\lambda)|^{2}\|V\_{1}(\cdot,\lambda)\|\_{t}^{2}\|h\|\_{t}^{2}\Big). \end{aligned}$$

Integration of this inequality over (a, x) and (7.9.5) lead to

$$\begin{split} \|\mathcal{H}(\lambda)h\|\_{x}^{2} \leq & \, 2 \int\_{a}^{x} \Big( |\Delta(t)^{\frac{1}{2}}V\_{1}(t,\lambda)|^{2} \|V\_{2}(\cdot,\lambda)\|\_{t}^{2} + |\Delta(t)^{\frac{1}{2}}V\_{2}(t,\lambda)|^{2} \|V\_{1}(\cdot,\lambda)\|\_{t}^{2} \Big) \|h\|\_{t}^{2} \, dt \\ = & \, 2 \int\_{a}^{x} \Big( \frac{d}{dt} \, \|V\_{1}(\cdot,\lambda)\|\_{t}^{2} \|V\_{2}(\cdot,\lambda)\|\_{t}^{2} \Big) \|h\|\_{t}^{2} \, dt \\ \leq & \, 2 \|h\|\_{x}^{2} \int\_{a}^{x} \Big( \frac{d}{dt} \, \|V\_{1}(\cdot,\lambda)\|\_{t}^{2} \|V\_{2}(\cdot,\lambda)\|\_{t}^{2} \Big) \, dt, \end{split}$$

which implies the desired result. $\square$
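The bound of Lemma 7.9.2 can be tested numerically for the constant-coefficient system $Jy' = \xi \Delta y$ with $H = 0$ and $\Delta = I\_2$ on $(0, x)$, whose fundamental matrix is a rotation. In the sketch below the test function $h$ is an arbitrary choice and all integrals are approximated by Riemann sums.

```python
import numpy as np

xi, x, n = 1.3, 4.0, 4000
s = np.linspace(0.0, x, n)
ds = s[1] - s[0]

# columns of the fundamental matrix V(t, xi) of J y' = xi y with
# J = [[0, -1], [1, 0]], H = 0, Delta = I_2 (a rotation matrix)
V1 = np.stack([np.cos(xi * s), -np.sin(xi * s)])   # shape (2, n)
V2 = np.stack([np.sin(xi * s),  np.cos(xi * s)])

h = np.stack([np.exp(-s), np.cos(3.0 * s)])        # an arbitrary test function

def norm(f):                                       # seminorm of L^2_Delta(0, x)
    return np.sqrt(np.sum(f**2) * ds)

# (H(xi)h)(t) = V1(t) int_0^t V2^* h ds - V2(t) int_0^t V1^* h ds
g2 = np.cumsum(np.sum(V2 * h, axis=0)) * ds
g1 = np.cumsum(np.sum(V1 * h, axis=0)) * ds
Hh = V1 * g2 - V2 * g1

# Lemma 7.9.2: ||H(xi)h||_x^2 <= 2 ||V1||_x^2 ||V2||_x^2 ||h||_x^2
assert norm(Hh) <= np.sqrt(2.0) * norm(V1) * norm(V2) * norm(h)
```

For this system $\|V\_i\|\_x^2 = x$, so the right-hand side of the estimate grows linearly in $x$, with comfortable slack over the actual value of $\|\mathcal{H}(\xi)h\|\_x$.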

Since the system is assumed to be definite on $\iota = (a,b)$, there is a compact subinterval $[\alpha, \beta] \subset \iota$ such that the system is definite on $[\alpha, \beta]$, and hence on any interval $(a,x)$ with $x > \beta$. This implies that for $x > \beta$ both functions

$$x \mapsto \|V\_1(\cdot, \lambda)\|\_x \quad \text{and} \quad x \mapsto \|V\_2(\cdot, \lambda)\|\_x$$

have positive values; cf. (7.5.1) and Lemma 7.5.2.

**Lemma 7.9.3.** Let $\xi \in \mathbb{R}$ be a fixed number. The function $x \mapsto \varepsilon\_\tau(x, \xi)$ given by

$$\sqrt{2} \varepsilon\_{\tau}(x,\xi) \| V\_1(\cdot,\xi) \|\_{x} \| V\_2(\cdot,\xi) \|\_{x} = 1, \quad x > \beta,$$

is well defined, continuous, nonincreasing, and satisfies

$$\lim\_{x \to b} \varepsilon\_{\tau}(x, \xi) = 0.$$

Proof. It is clear that $\varepsilon\_\tau(x, \xi) > 0$ is well defined due to the assumption that $x > \beta$. Note that $x \mapsto \|V\_1(\cdot,\xi)\|\_x \|V\_2(\cdot,\xi)\|\_x$ is continuous and nondecreasing. The assumption that b is in the limit-point case implies that not both $V\_1(\cdot,\xi)$ and $V\_2(\cdot,\xi)$ belong to $L^2\_\Delta(\iota)$. Thus, the limit result follows. □
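Solving the defining relation in Lemma 7.9.3 for $\varepsilon\_\tau$ yields the explicit formula

$$\varepsilon\_\tau(x,\xi) = \frac{1}{\sqrt{2}\, \|V\_1(\cdot,\xi)\|\_x \|V\_2(\cdot,\xi)\|\_x}, \qquad x > \beta,$$

which makes the assertions of the lemma transparent: the denominator is continuous, nondecreasing, and, by the limit-point assumption, tends to ∞ as x → b.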

The function $x \mapsto \varepsilon\_\tau(x, \xi)$ appears in the estimate in the following theorem.

**Theorem 7.9.4.** Let $M\_\tau$ be the Weyl function in (7.9.2) corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0^\tau, \Gamma\_1^\tau\}$. Assume that $\xi \in \mathbb{R}$ and let $\varepsilon\_\tau(x, \xi)$ be as in Lemma 7.9.3. Then for $a < \beta < x < b$

$$\frac{1}{d\_0} \le \frac{\|V\_2(\cdot,\xi)\|\_x}{\|V\_1(\cdot,\xi)\|\_x} \left| M\_\tau(\xi + i\varepsilon\_\tau(x,\xi)) \right| \le d\_0,$$

where $d\_0 = 1 + 2\sqrt{2} + 2\sqrt{2+\sqrt{2}}$.

Proof. Assume that $\xi \in \mathbb{R}$ and let $\varepsilon > 0$. Define the function $\psi(\cdot, \xi, \varepsilon)$ by

$$
\psi(\cdot,\xi,\varepsilon) = V\_1(\cdot,\xi) + M\_\tau(\xi + i\varepsilon)V\_2(\cdot,\xi). \tag{7.9.9}
$$

For any $a < x < b$ this leads to

$$\left| \|V\_2(\cdot, \xi)\|\_{x} |M\_{\tau}(\xi + i\varepsilon)| - \|V\_1(\cdot, \xi)\|\_{x} \right| \le \| \psi(\cdot, \xi, \varepsilon) \|\_{x}$$

or, equivalently, when $\beta < x < b$,

$$\left| \frac{\|V\_2(\cdot,\xi)\|\_x}{\|V\_1(\cdot,\xi)\|\_x} |M\_\tau(\xi+i\varepsilon)| - 1 \right| \le \frac{\|\psi(\cdot,\xi,\varepsilon)\|\_x}{\|V\_1(\cdot,\xi)\|\_x}.\tag{7.9.10}$$

The term on the right-hand side of (7.9.10) will now be estimated in a suitable way. First note that it follows from (7.9.6) that for $\lambda \in \mathbb{C}$ and $\mu \in \mathbb{C} \setminus \mathbb{R}$ one obtains

$$V\_1(\cdot,\lambda) + M\_\tau(\mu)V\_2(\cdot,\lambda) - \gamma\_\tau(\cdot,\mu) = (\lambda - \mu)\mathcal{H}(\lambda)\gamma\_\tau(\cdot,\mu). \tag{7.9.11}$$

Applying the identity in (7.9.11) with λ = ξ and μ = ξ + iε one sees that

$$
\psi(\cdot,\xi,\varepsilon) = \gamma\_\tau(\cdot,\xi+i\varepsilon) - i\varepsilon \mathcal{H}(\xi)\gamma\_\tau(\cdot,\xi+i\varepsilon),
$$

which expresses the function $\psi(\cdot, \xi, \varepsilon)$ in (7.9.9) in terms of the γ-field $\gamma\_\tau$. Hence, it follows from Lemma 7.9.2 that

$$\|\psi(\cdot,\xi,\varepsilon)\|\_{x} \le \left(1+\sqrt{2}\,\varepsilon\,\|\,V\_1(\cdot,\xi)\|\_{x}\|\,V\_2(\cdot,\xi)\|\_{x}\right)\|\gamma\_\tau(\cdot,\xi+i\varepsilon)\|\_{x}.$$

Therefore, the right-hand side of (7.9.10) is estimated by

$$\begin{split} & \frac{\left(1+\sqrt{2}\,\varepsilon\,\|V\_{1}(\cdot,\xi)\|\_{x}\|V\_{2}(\cdot,\xi)\|\_{x}\right)\|\gamma\_{\tau}(\cdot,\xi+i\varepsilon)\|\_{x}}{\|V\_{1}(\cdot,\xi)\|\_{x}} \\ & \qquad = \frac{1+\sqrt{2}\,\varepsilon\,\|V\_{1}(\cdot,\xi)\|\_{x}\|V\_{2}(\cdot,\xi)\|\_{x}}{(\|V\_{1}(\cdot,\xi)\|\_{x}\|V\_{2}(\cdot,\xi)\|\_{x})^{\frac{1}{2}}} \frac{\|V\_{2}(\cdot,\xi)\|\_{x}^{\frac{1}{2}}}{\|V\_{1}(\cdot,\xi)\|\_{x}^{\frac{1}{2}}} \|\gamma\_{\tau}(\cdot,\xi+i\varepsilon)\|\_{x}. \end{split}$$

Now observe that $\|\gamma\_\tau(\cdot, \xi+i\varepsilon)\|\_x \le \|\gamma\_\tau(\cdot, \xi+i\varepsilon)\|\_b$ and it follows from (7.9.3) that

$$\|\gamma\_{\tau}(\cdot,\xi+i\varepsilon)\|\_{b}\leq\sqrt{\frac{\mathrm{Im}\,M\_{\tau}(\xi+i\varepsilon)}{\varepsilon}}\leq\sqrt{\frac{|M\_{\tau}(\xi+i\varepsilon)|}{\varepsilon}}.$$

Thus, for any $\varepsilon > 0$ and $\beta < x < b$ one obtains the inequality

$$\begin{split} & \left| \frac{\|V\_{2}(\cdot,\xi)\|\_{x}}{\|V\_{1}(\cdot,\xi)\|\_{x}} |M\_{\tau}(\xi+i\varepsilon)|-1 \right| \\ & \leq \frac{1+\sqrt{2}\varepsilon \, \|V\_{1}(\cdot,\xi)\|\_{x} \|V\_{2}(\cdot,\xi)\|\_{x}}{(\varepsilon \, \|V\_{1}(\cdot,\xi)\|\_{x} \|V\_{2}(\cdot,\xi)\|\_{x})^{\frac{1}{2}}} \left( \frac{\|V\_{2}(\cdot,\xi)\|\_{x}}{\|V\_{1}(\cdot,\xi)\|\_{x}} \, |M\_{\tau}(\xi+i\varepsilon)| \right)^{\frac{1}{2}}. \end{split}$$

Now for $\xi \in \mathbb{R}$ and $\beta < x < b$ choose $\varepsilon = \varepsilon\_\tau(x, \xi)$ in this estimate. This choice minimizes the first factor on the right-hand side, whose minimal value is $2^{5/4}$. Hence, the nonnegative quantity

$$Q = \frac{\|V\_2(\cdot,\xi)\|\_x}{\|V\_1(\cdot,\xi)\|\_x}|M\_\tau(\xi + i\varepsilon\_\tau(x,\xi))|$$

satisfies the inequality

$$|Q - 1| \le 2^{5/4} Q^{\frac{1}{2}}$$

or, equivalently, $Q^2 - 2Q + 1 \le 4\sqrt{2}\,Q$. Therefore, $1/d\_0 \le Q \le d\_0$, which completes the proof. □
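The two numerical facts used in the proof can be checked directly. With the abbreviation $c = \|V\_1(\cdot,\xi)\|\_x \|V\_2(\cdot,\xi)\|\_x$, the first factor $\varepsilon \mapsto (1+\sqrt{2}\,\varepsilon c)/(\varepsilon c)^{\frac{1}{2}}$ is minimal when $\sqrt{2}\,\varepsilon c = 1$, that is, for $\varepsilon = \varepsilon\_\tau(x,\xi)$, and then

$$\frac{1+\sqrt{2}\,\varepsilon c}{(\varepsilon c)^{\frac{1}{2}}} = \frac{2}{2^{-\frac{1}{4}}} = 2^{5/4}.$$

Furthermore, $Q^2 - 2Q + 1 \le 4\sqrt{2}\,Q$ means $Q^2 - (2+4\sqrt{2})Q + 1 \le 0$, and the roots of the corresponding quadratic are

$$1 + 2\sqrt{2} \pm \sqrt{(1+2\sqrt{2})^2 - 1} = 1 + 2\sqrt{2} \pm 2\sqrt{2+\sqrt{2}},$$

whose product equals 1; this gives $1/d\_0 \le Q \le d\_0$ with $d\_0 = 1 + 2\sqrt{2} + 2\sqrt{2+\sqrt{2}}$.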

The following result is now a direct consequence of Theorem 7.9.4.

**Theorem 7.9.5.** Let M be the Weyl function corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ and let $\xi \in \mathbb{R}$. Then the following statements hold:

(i) If $\tau \in \mathbb{R}$, then the solution $Y\_1(\cdot, \xi) + \tau Y\_2(\cdot, \xi)$ of the boundary value problem

$$Jf' - Hf = \xi \Delta f, \quad f\_2(a) = \tau f\_1(a),$$

which is unique up to scalar multiples, is subordinate if and only if

$$\lim\_{\varepsilon \downarrow 0} M(\xi + i\varepsilon) = \tau.$$

(ii) If $\tau = \infty$, then the solution $Y\_2(\cdot, \xi)$ of the boundary value problem

$$Jf' - Hf = \xi \Delta f, \quad f\_1(a) = 0,$$

which is unique up to scalar multiples, is subordinate if and only if

$$\lim\_{\varepsilon \downarrow 0} M(\xi + i\varepsilon) = \infty.$$

Proof. Since $x \mapsto \varepsilon\_\tau(x, \xi)$ is continuous, nonincreasing, and has limit 0 as $x \to b$, one obtains the identity

$$\lim\_{\varepsilon \downarrow 0} M\_{\tau}(\xi + i\varepsilon) = \lim\_{x \to b} M\_{\tau}(\xi + i\varepsilon\_{\tau}(x, \xi)).$$

(i) Assume that $\tau \in \mathbb{R}$ and note that

$$V\_2(\cdot,\xi) = \frac{1}{\sqrt{\tau^2 + 1}} \left( Y\_1(\cdot,\xi) + \tau Y\_2(\cdot,\xi) \right),$$

by (7.9.4). It will be shown that $|M\_\tau(\xi + i\varepsilon)| \to \infty$ for $\varepsilon \downarrow 0$ if and only if the solution $V\_2(\cdot, \xi)$ is subordinate. To see this, assume first that $|M\_\tau(\xi + i\varepsilon)| \to \infty$. Then, by Theorem 7.9.4, it follows that

$$\lim\_{x \to b} \frac{\|V\_2(\cdot, \xi)\|\_x}{\|V\_1(\cdot, \xi)\|\_x} = 0. \tag{7.9.12}$$

Hence, for any $c\_1, c\_2 \in \mathbb{R}$, $c\_1 \neq 0$, one obtains from (7.9.12) that

$$\lim\_{x \to b} \frac{\|V\_2(\cdot, \xi)\|\_x}{\|c\_1 V\_1(\cdot, \xi) + c\_2 V\_2(\cdot, \xi)\|\_x} = 0,\tag{7.9.13}$$

and therefore the solution $V\_2(\cdot, \xi)$ is subordinate. Conversely, assume that $V\_2(\cdot, \xi)$ is subordinate, so that (7.9.13) holds for all $c\_1, c\_2 \in \mathbb{R}$, $c\_1 \neq 0$. Then clearly (7.9.12) holds, and therefore it follows from Theorem 7.9.4 that $|M\_\tau(\xi + i\varepsilon)| \to \infty$.

It is a consequence of (7.9.2) that for ε ↓ 0 one has

$$|M\_{\tau}(\xi + i\varepsilon)| \to \infty \quad \Leftrightarrow \quad M(\xi + i\varepsilon) \to \tau.$$

This equivalence leads to the assertion for $\tau \in \mathbb{R}$.

(ii) The case $\tau = \infty$ can be treated in the same way as (i). □

The self-adjoint extension $A\_0 = \ker \Gamma\_0$ of $T\_{\min}$ is given by

$$A\_0 = \left\{ \{f, g\} \in T\_{\text{max}} \, : \, f\_1(a) = 0 \right\};\tag{7.9.14}$$

cf. (7.9.1). The boundary condition

$$f\_1(a) = 0\tag{7.9.15}$$

plays a central role in the following definition, which is based on Theorem 7.9.5; cf. Definition 6.7.6.

**Definition 7.9.6.** With the canonical system $Jf' - Hf = \xi\Delta f$, $\xi \in \mathbb{R}$, the following subsets of $\mathbb{R}$ are associated:


It is a direct consequence of Definition 7.9.6 that

$$
\mathbb{R} = \mathcal{M}^c \sqcup \mathcal{M}\_{\mathrm{ac}} \sqcup \mathcal{M}\_{\mathrm{s}}, \quad \mathcal{M} = \mathcal{M}\_{\mathrm{ac}} \sqcup \mathcal{M}\_{\mathrm{s}}, \quad \text{and} \quad \mathcal{M}\_{\mathrm{s}} = \mathcal{M}\_{\mathrm{sc}} \sqcup \mathcal{M}\_{\mathrm{p}},
$$

where $\sqcup$ stands for disjoint union.

Let the Weyl function M of the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ have the integral representation

$$M(\lambda) = \alpha + \beta \lambda + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) d\sigma(t), \tag{7.9.16}$$

where $\alpha \in \mathbb{R}$, $\beta \ge 0$, and the measure σ satisfies

$$\int\_{\mathbb{R}} \frac{1}{t^2 + 1} \, d\sigma(t) < \infty;$$

cf. Theorem A.2.5. The following proposition is based on Corollary 3.1.8, where minimal supports for the various parts of the measure σ in the integral representation of M are described in terms of the boundary behavior of the Nevanlinna function M. The proof of Proposition 7.9.7 is the same as the proof of Proposition 6.7.7 and will not be repeated.

**Proposition 7.9.7.** Let M be the Weyl function associated with the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ and let σ be the corresponding measure in (7.9.16). Then the sets

$$
\mathcal{M}, \ \mathcal{M}\_{\mathrm{ac}}, \ \mathcal{M}\_{\mathrm{s}}, \ \mathcal{M}\_{\mathrm{sc}}, \ \mathcal{M}\_{\mathrm{p}},
$$

are minimal supports for the measures

$$\sigma, \sigma\_{\rm ac}, \sigma\_{\rm s}, \sigma\_{\rm sc}, \sigma\_{\rm p},$$

respectively.

The minimal supports in Proposition 7.9.7 are intimately connected with the spectrum of A0. For the absolutely continuous spectrum one obtains in the same way as in Theorem 6.7.8 the following result, where the notion of the absolutely continuous closure of a Borel set from Definition 3.2.4 is used. Similar statements (with an inclusion) can be formulated for the singular parts of the spectrum; cf. Section 3.6.

**Theorem 7.9.8.** Let $A\_0$ be the self-adjoint relation in (7.9.14) and let $\mathcal{M}\_{\mathrm{ac}}$ be as in Definition 7.9.6. Then

$$
\sigma\_{\rm ac}(A\_0) = \operatorname{clos}\_{\rm ac}(\mathcal{M}\_{\rm ac})\,.
$$

## **7.10 Special classes of canonical systems**

In this section two particular types of canonical systems are studied. First it is shown how a class of Sturm–Liouville problems, which are slightly more general than the equations treated in Chapter 6, fits in the framework of canonical systems. In this context the results from the previous sections can be carried over to Sturm–Liouville equations. The second class of canonical systems discussed here consists of systems of the form (7.2.3) with H = 0. In this situation a simple limit-point/limit-circle criterion is provided.

#### **Weighted Sturm–Liouville equations**

Let $\iota \subset \mathbb{R}$ be an open interval. Let $1/p, q, r, s \in L^1\_{\mathrm{loc}}(\iota)$ be real functions, assume $r(t) \ge 0$ for almost all $t \in \iota$, and define the 2 × 2 matrix functions H and Δ by

$$H(t) = \begin{pmatrix} -q(t) & -s(t) \\ -s(t) & 1/p(t) \end{pmatrix} \quad \text{and} \quad \Delta(t) = \begin{pmatrix} r(t) & 0 \\ 0 & 0 \end{pmatrix},\tag{7.10.1}$$

respectively. Let the 2 × 2 matrix J be as in (7.2.2). If the vector functions f and g satisfy the canonical system $Jf' - Hf = \Delta g$, then their first components $\mathfrak{f} = f\_1$ and $\mathfrak{g} = g\_1$ satisfy the weighted Sturm–Liouville equation

$$-\left(\mathfrak{f}^{[1]}\right)' + s\mathfrak{f}^{[1]} + q\mathfrak{f} = r\mathfrak{g}, \quad \text{where} \quad \mathfrak{f}^{[1]} = p(\mathfrak{f}' + s\mathfrak{f}), \tag{7.10.2}$$

and, since $f \in AC(\iota)$, it follows that $\mathfrak{f}, \mathfrak{f}^{[1]} \in AC(\iota)$. Conversely, if $\mathfrak{f}, \mathfrak{f}^{[1]} \in AC(\iota)$ and $\mathfrak{f}, \mathfrak{g}$ satisfy (7.10.2), then the vector functions

$$f = \begin{pmatrix} \mathfrak{f} \\ \mathfrak{f}^{[1]} \end{pmatrix} \quad \text{and} \quad g = \begin{pmatrix} \mathfrak{g} \\ 0 \end{pmatrix},$$

satisfy the canonical system (7.2.3) with the coefficients given by (7.10.1). Since the functions 1/p, q, r, s are assumed to be real, the system $Jf' - Hf = \Delta g$ is real. Moreover, the canonical system corresponding to (7.10.1) is definite precisely when any solution f of $Jf' - Hf = 0$ which satisfies $\Delta f = 0$ vanishes or, equivalently,

$$-f\_2' + qf\_1 + sf\_2 = 0, \quad f\_1' + sf\_1 - (1/p)f\_2 = 0, \quad rf\_1 = 0 \quad \Rightarrow \quad f = 0.$$

In accordance with the definition of definiteness for canonical equations, the Sturm–Liouville equation (7.10.2) is said to be definite if

$$-(\mathfrak{f}^{[1]})' + s\mathfrak{f}^{[1]} + q\mathfrak{f} = 0, \quad r\mathfrak{f} = 0 \quad \Rightarrow \quad \mathfrak{f} = 0.$$

In particular, the Sturm–Liouville equation (7.10.2) is definite if the weight function r is positive on an open interval.
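For the identification above it may help to write out the canonical system componentwise, assuming that J is the standard matrix $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ (the matrix J itself is fixed in (7.2.2), which is not reproduced here). Then $Jf' - Hf = \Delta g$ with the coefficients (7.10.1) reads

$$-f\_2' + q f\_1 + s f\_2 = r g\_1 \quad \text{and} \quad f\_1' + s f\_1 - \frac{1}{p}\, f\_2 = 0.$$

The second equation gives $f\_2 = p(f\_1' + s f\_1) = \mathfrak{f}^{[1]}$ with $\mathfrak{f} = f\_1$, and inserting this into the first equation yields precisely (7.10.2) with $\mathfrak{g} = g\_1$.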

The matrix function Δ in (7.10.1) induces the spaces $\mathcal{L}^2\_\Delta(\iota)$ and $L^2\_\Delta(\iota)$. With the weight r it is natural to introduce the space $\mathcal{L}^2\_r(\iota)$ of all complex measurable functions φ for which

$$\int\_{\iota} |\varphi(s)|^2 r(s) \, ds < \infty.$$

The corresponding semi-inner product is denoted by $(\cdot,\cdot)\_r$ and the corresponding Hilbert space of equivalence classes of elements from $\mathcal{L}^2\_r(\iota)$ is denoted by $L^2\_r(\iota)$. It is clear that the mapping R defined by

$$f = \begin{pmatrix} f\_1 \\ f\_2 \end{pmatrix} \in \mathcal{L}^2\_{\Delta}(\iota) \mapsto f\_1 \in \mathcal{L}^2\_r(\iota)$$

is an isometry with respect to the semi-inner products, thanks to the identity

$$(f,f)\_{\Delta} = \int\_{\iota} \begin{pmatrix} f\_1(s)^\* & f\_2(s)^\* \end{pmatrix} \begin{pmatrix} r(s) & 0\\ 0 & 0 \end{pmatrix} \begin{pmatrix} f\_1(s)\\ f\_2(s) \end{pmatrix} \, ds = (f\_1, f\_1)\_r.$$

Furthermore, this mapping is onto, since each function in $\mathcal{L}^2\_r(\iota)$ can be regarded as the first component of an element in $\mathcal{L}^2\_\Delta(\iota)$ with the understanding that the second component can be any measurable function. Therefore, the mapping R induces a unitary operator, again denoted by R, from $L^2\_\Delta(\iota)$ onto $L^2\_r(\iota)$.

Assume now that the system or, equivalently, the Sturm–Liouville equation is definite. In the Hilbert space $L^2\_\Delta(\iota)$ there are the preminimal relation $T\_0$, minimal relation $T\_{\min}$, and maximal relation $T\_{\max}$ associated with the canonical system $Jf' - Hf = \Delta g$:

$$T\_0 \subset \overline{T\_0} = T\_{\min} \subset T\_{\max} = (T\_{\min})^\*.$$

Likewise, one can define corresponding relations in the Hilbert space $L^2\_r(\iota)$. The maximal relation $\mathcal{T}\_{\max}$ is defined as follows:

$$\mathcal{T}\_{\max} = \left\{ \{ \mathfrak{f}, \mathfrak{g} \} \in L^2\_r(\iota) \times L^2\_r(\iota) \, : \, -(\mathfrak{f}^{[1]})' + s\mathfrak{f}^{[1]} + q\mathfrak{f} = r\mathfrak{g} \right\},$$

in the sense that there exist representatives $\mathfrak{f}$ and $\mathfrak{g} \in \mathcal{L}^2\_r(\iota)$ of f and g, respectively, such that $\mathfrak{f} \in AC(\iota)$, $\mathfrak{f}^{[1]} \in AC(\iota)$, and (7.10.2) holds. It is clear that the definiteness of the canonical system or, equivalently, of the equation (7.10.2) implies that each element $\mathfrak{f} \in \operatorname{dom} \mathcal{T}\_{\max}$ has a unique representative $\mathfrak{f}$ such that $\mathfrak{f} \in AC(\iota)$, $\mathfrak{f}^{[1]} \in AC(\iota)$; cf. Lemma 7.6.1. The preminimal relation $\mathcal{T}\_0$ and the minimal relation $\mathcal{T}\_{\min}$ are defined by

$$\mathcal{T}\_0 = \left\{ \{\mathfrak{f}, \mathfrak{g}\} \in \mathcal{T}\_{\max} \, : \, \mathfrak{f} \text{ has compact support} \right\} \quad \text{and} \quad \mathcal{T}\_{\min} = \overline{\mathcal{T}\_0}.$$

It is not difficult to see that the mapping $\dot{R}$ defined by

$$
\dot{R}\{f,g\} = \{Rf, Rg\}, \quad \{f,g\} \in L^2\_{\Delta}(\iota) \times L^2\_{\Delta}(\iota),
$$

takes $T\_{\max}$ one-to-one onto $\mathcal{T}\_{\max}$, including absolutely continuous representatives, and that with $\{\mathfrak{f}, \mathfrak{g}\} = \dot{R}\{f,g\}$ and $\{\mathfrak{h},\mathfrak{k}\} = \dot{R}\{h, k\}$:

$$(g,h)\_{\Delta} - (f,k)\_{\Delta} = (\mathfrak{g},\mathfrak{h})\_r - (\mathfrak{f},\mathfrak{k})\_r, \quad \{f,g\}, \{h,k\} \in T\_{\text{max}}.\tag{7.10.3}$$

Similarly, $\dot{R}$ takes $T\_0$ one-to-one onto $\mathcal{T}\_0$ and hence $\dot{R}$ takes $T\_{\min}$ one-to-one onto $\mathcal{T}\_{\min}$.

In the Hilbert space $L^2\_r(\iota)$ the relations $\mathcal{T}\_0$, $\mathcal{T}\_{\min}$, and $\mathcal{T}\_{\max}$ associated with the Sturm–Liouville equation (7.10.2) satisfy

$$\mathcal{T}\_0 \subset \overline{\mathcal{T}\_0} = \mathcal{T}\_{\min} \subset \mathcal{T}\_{\max} = (\mathcal{T}\_{\min})^\*.$$

Furthermore, R maps $\ker(T\_{\max} - \lambda)$ one-to-one onto $\ker(\mathcal{T}\_{\max} - \lambda)$. Since the functions p, q, s, and r are real, it follows that the defect numbers of $T\_{\min}$ and $\mathcal{T}\_{\min}$ are equal. Let $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ be a boundary triplet for $T\_{\max}$ and let $\mathcal{T}\_{\max}$ be the corresponding maximal relation for the Sturm–Liouville operator. Then the mappings $\Gamma\_0'$ and $\Gamma\_1'$ from $\mathcal{T}\_{\max}$ to $\mathcal{G}$ given by

$$
\Gamma\_0' \{ \mathfrak{f}, \mathfrak{g} \} = \Gamma\_0 \{ f, g \} \quad \text{and} \quad \Gamma\_1' \{ \mathfrak{f}, \mathfrak{g} \} = \Gamma\_1 \{ f, g \}, \quad \{ \mathfrak{f}, \mathfrak{g} \} = \dot{R} \{ f, g \}, \tag{7.10.4}
$$

form a boundary triplet for $\mathcal{T}\_{\max}$; cf. (7.10.3). The boundary triplet $\{\mathcal{G}, \Gamma\_0, \Gamma\_1\}$ and the one in (7.10.4) have the same Weyl function.

Via the above identification, the discussion and the results for canonical systems with regular, quasiregular, and singular endpoints in Section 7.7 and Section 7.8 remain valid for weighted Sturm–Liouville equations of the form (7.10.2). Note that in the special case where s(t) = 0 and r(t) > 0 for almost all t ∈ ı the Sturm–Liouville expression (7.10.2) coincides with the Sturm–Liouville expression studied in Chapter 6.

#### **Special canonical systems**

This subsection is devoted to the special class of canonical differential equations which have the form

$$Jf' = \lambda \Delta f + \Delta g\tag{7.10.5}$$

on an open interval ı = (a, b), i.e., the class of canonical systems of the form (7.2.3) with H = 0. It will be assumed that the system is real and definite on ı. Here definiteness means that the identity $\Delta(t)e = 0$ for some $e \in \mathbb{C}^2$ and all $t \in \iota$ implies $e = 0$. Note that Lemma 7.2.5 shows that any real definite canonical system of the form (7.2.3) can be transformed into the form (7.10.5) with a possible real shift of the eigenvalue parameter. For this class of equations the limit-point and limit-circle classification at an endpoint can be characterized in terms of the integrability of the function Δ.

**Theorem 7.10.1.** Let the canonical system (7.10.5) be real and definite, and let the endpoint a be regular. Then there is the alternative:

(i) if Δ is integrable on ı, then the endpoint b is in the limit-circle case;

(ii) if Δ is not integrable on ı, then the endpoint b is in the limit-point case.
Proof. By assumption, the canonical system (7.10.5) is real and definite, and the endpoint a is regular. If Δ is integrable on ı, then b is quasiregular, which implies that b is in the limit-circle case; see Corollary 7.4.6. Therefore, it suffices to show that if Δ is not integrable, then b is in the limit-point case. Hence, assume that Δ is not integrable at b, so that

$$\infty = \int\_{a}^{b} |\Delta(s)| \, ds \le \int\_{a}^{b} \text{tr}\, \Delta(s) \, ds,\tag{7.10.6}$$

where the estimate |Δ(s)| ≤ tr Δ(s) follows from (7.1.6). In order to show that the endpoint b is in the limit-point case one must verify that

$$\lim\_{t \to b} h(t)^\* Jf(t) = 0$$

for all $\{f,g\}, \{h, k\} \in T\_{\max}$; cf. Lemma 7.6.8. Since the limit on the left-hand side exists due to Lemma 7.6.4, it suffices to verify the weaker statement

$$\liminf\_{t \to b} h(t)^\* Jf(t) = 0 \tag{7.10.7}$$

for all $\{f,g\}, \{h, k\} \in T\_{\max}$. According to Corollary 7.6.6 and the von Neumann formula in Theorem 1.7.11, it then suffices to prove (7.10.7) for elements of the form $f = u\_f + v\_f$ and $h = u\_h + v\_h$, where

$$\{u\_f,\lambda u\_f\}, \{u\_h,\lambda u\_h\} \in T\_{\text{max}} \quad\text{and}\quad \{v\_f,\mu v\_f\}, \{v\_h,\mu v\_h\} \in T\_{\text{max}}$$

with λ and μ in different half-planes. Finally, by polarization, it is clearly sufficient to show that

$$\liminf\_{t \to b} \, f(t)^\* Jf(t) = 0 \tag{7.10.8}$$

for $f = u + v$, where $\{u, \lambda u\}, \{v, \mu v\} \in T\_{\max}$. The proof of (7.10.8) is carried out in five steps.

Step 1. Let u be a solution of the homogeneous equation $Jy' = \lambda\Delta y$ which satisfies $u \in \mathcal{L}^2\_\Delta(\iota)$. Then

$$u(t) = u(a) + \int\_{a}^{t} J^{-1} \lambda \Delta(s) u(s) \, ds. \tag{7.10.9}$$

Hence, it follows from (7.10.9) as in Lemma 7.1.4 that

$$\begin{aligned} |u(t)| &\le |u(a)| + |\lambda| \int\_a^t |\Delta(s)u(s)| \, ds \\ &\le |u(a)| + |\lambda| \left( \int\_a^t |\Delta(s)| \, ds \right)^{\frac{1}{2}} \left( \int\_a^t |\Delta(s)^{\frac{1}{2}} u(s)|^2 \, ds \right)^{\frac{1}{2}} \\ &\le |u(a)| + |\lambda| \left( \int\_a^t |\Delta(s)| \, ds \right)^{\frac{1}{2}} \|u\|\_{\Delta} . \end{aligned}$$

Due to the estimate |Δ(s)| ≤ tr Δ(s) one obtains

$$|u(t)| \le |u(a)| + |\lambda| \left( \int\_a^t \text{tr}\,\Delta(s) \, ds \right)^{\frac{1}{2}} ||u||\_{\Delta}. \tag{7.10.10}$$

It is clear that for a solution v of the homogeneous equation $Jy' = \mu\Delta y$ which satisfies $v \in \mathcal{L}^2\_\Delta(\iota)$ one obtains the similar inequality

$$|v(t)| \le |v(a)| + |\mu| \left( \int\_a^t \text{tr}\,\Delta(s) \, ds \right)^{\frac{1}{2}} ||v||\_{\Delta}. \tag{7.10.11}$$

Step 2. Let u and v be as in Step 1 with λ and μ in different half-planes. Due to the assumption $\int\_a^b \operatorname{tr} \Delta(s) \, ds = \infty$ in (7.10.6), one can choose $t\_0 > a$ so large that $\int\_a^t \operatorname{tr} \Delta(s) \, ds \ge 1$ for $t \ge t\_0$. Then it follows from the estimates (7.10.10) and (7.10.11) that

$$|u(t)| \le \left(\int\_a^t \operatorname{tr}\Delta(s) \, ds\right)^{\frac{1}{2}} \left(|u(a)| + |\lambda| \, \|u\|\_{\Delta}\right), \quad t \ge t\_0,$$

$$|v(t)| \le \left(\int\_a^t \operatorname{tr}\Delta(s) \, ds\right)^{\frac{1}{2}} \left(|v(a)| + |\mu| \, \|v\|\_{\Delta}\right), \quad t \ge t\_0.$$

Consequently, for f = u + v,

$$|f(t)| \le C\_f \left( \int\_a^t \text{tr}\,\Delta(s) \, ds \right)^{\frac{1}{2}}, \quad t \ge t\_0,\tag{7.10.12}$$

where $C\_f = |u(a)| + |\lambda| \, \|u\|\_\Delta + |v(a)| + |\mu| \, \|v\|\_\Delta$.

Step 3. Define the 2 × 2 matrix function $\Delta\_0$ by

$$\Delta\_0(t) = \begin{cases} (\operatorname{tr}\Delta(t))^{-1}\Delta(t), & \operatorname{tr}\Delta(t) \neq 0, \\ \frac{1}{2}I, & \operatorname{tr}\Delta(t) = 0, \end{cases}$$

for almost every t ∈ ı. Since tr Δ(t) = 0 implies Δ(t) = 0 (see (7.1.6)), one has $\Delta(t) = (\operatorname{tr} \Delta(t))\Delta\_0(t)$. Then $\Delta\_0$ is nonnegative and

$$
\Delta\_0 = \begin{pmatrix} \alpha & \beta \\ \beta & \delta \end{pmatrix},
$$

where the functions α and δ are nonnegative with α + δ = 1, and the function β is real. Define the matrix function $\Delta\_1$ by

$$\begin{aligned} \Delta\_1 &= \begin{pmatrix} (\text{sgn}\,\beta)\alpha^{\frac{1}{2}} & \delta^{\frac{1}{2}} \end{pmatrix}^\* \begin{pmatrix} (\text{sgn}\,\beta)\alpha^{\frac{1}{2}} & \delta^{\frac{1}{2}} \end{pmatrix} \\ &= \begin{pmatrix} \alpha & (\text{sgn}\,\beta)\alpha^{\frac{1}{2}}\delta^{\frac{1}{2}} \\ (\text{sgn}\,\beta)\alpha^{\frac{1}{2}}\delta^{\frac{1}{2}} & \delta \end{pmatrix}; \end{aligned} \tag{7.10.13}$$

then $\Delta\_1$ is nonnegative. Moreover, since $\alpha^{\frac{1}{2}}\delta^{\frac{1}{2}} \ge (\operatorname{sgn}\beta)\beta$ it follows that the matrix

$$2\Delta\_0 - \Delta\_1 = \begin{pmatrix} \alpha & 2\beta - (\text{sgn } \beta)\alpha^{\frac{1}{2}}\delta^{\frac{1}{2}} \\ 2\beta - (\text{sgn } \beta)\alpha^{\frac{1}{2}}\delta^{\frac{1}{2}} & \delta \end{pmatrix}$$

is nonnegative. Therefore, $\Delta\_1(t) \le 2\Delta\_0(t)$ and one has the estimate

$$2(\operatorname{tr}\Delta(t))\Delta\_1(t) \le 2(\operatorname{tr}\Delta(t))\Delta\_0(t) = 2\Delta(t) \tag{7.10.14}$$

for almost every t ∈ ı.
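Both nonnegativity statements in Step 3 rest on the determinant inequality for $\Delta\_0$: since $\Delta\_0(t) \ge 0$, one has $\det \Delta\_0 = \alpha\delta - \beta^2 \ge 0$, that is, $|\beta| \le \alpha^{\frac{1}{2}}\delta^{\frac{1}{2}}$. In particular, $\alpha^{\frac{1}{2}}\delta^{\frac{1}{2}} \ge (\operatorname{sgn}\beta)\beta$, and for $\beta \neq 0$ the off-diagonal entry of $2\Delta\_0 - \Delta\_1$ satisfies

$$\left| 2\beta - (\operatorname{sgn}\beta)\alpha^{\frac{1}{2}}\delta^{\frac{1}{2}} \right| = \left| 2|\beta| - \alpha^{\frac{1}{2}}\delta^{\frac{1}{2}} \right| \le \alpha^{\frac{1}{2}}\delta^{\frac{1}{2}},$$

while for β = 0 it vanishes. In either case $\det(2\Delta\_0 - \Delta\_1) \ge \alpha\delta - \alpha\delta = 0$, so that $2\Delta\_0 - \Delta\_1$ is nonnegative.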

Step 4. It will be shown that

$$\liminf\_{t \to b} \left[ f(t)^\* \Delta\_1(t) f(t) \int\_a^t \text{tr}\,\Delta(s) \, ds \right] = 0 \tag{7.10.15}$$

for $f \in \mathcal{L}^2\_\Delta(\iota)$ and, in particular, for f = u + v as in Step 2. In fact, assume that (7.10.15) does not hold. Then there exist $a < a' < b$ and $\varepsilon > 0$ such that for all $t \ge a'$

$$\varepsilon \le f(t)^\* \Delta\_1(t) f(t) \int\_a^t \text{tr}\, \Delta(s) \, ds$$

or, equivalently,

$$\varepsilon \frac{\text{tr}\,\Delta(t)}{\int\_{a}^{t} \text{tr}\,\Delta(s) \,ds} \leq f(t)^{\*} \Delta\_{1}(t) f(t) \,\text{tr}\,\Delta(t). \tag{7.10.16}$$

Integration of the right-hand side of (7.10.16) together with (7.10.14) leads to

$$\int\_{a'}^{b} f(t)^\* \Delta\_1(t) f(t) \operatorname{tr} \Delta(t) \, dt \le 2 \int\_{a'}^{b} f(t)^\* \Delta(t) f(t) \, dt < \infty,$$

while integration of the left-hand side of (7.10.16) gives

$$\varepsilon \int\_{a'}^{b} \frac{\operatorname{tr} \Delta(t)}{\int\_{a}^{t} \operatorname{tr} \Delta(s) \, ds} \, dt = \varepsilon \int\_{a'}^{b} \frac{d}{dt} \left( \log \int\_{a}^{t} \operatorname{tr} \Delta(s) \, ds \right) dt = \infty,$$

due to (7.10.6). This contradiction shows that (7.10.15) is valid.

Step 5. It will be shown that for f = u + v as in Step 2 the limit in (7.10.15) implies the limit in (7.10.8). It is helpful to introduce the notation

$$
\varphi = \begin{pmatrix} (\operatorname{sgn}\beta)\alpha^{\frac{1}{2}} & \delta^{\frac{1}{2}} \end{pmatrix} \begin{pmatrix} f\_1 \\ f\_2 \end{pmatrix},
$$

so that $|\varphi|^2 = f^\*\Delta\_1 f$; cf. (7.10.13). Then the limit result (7.10.15) can be written as

$$\liminf\_{t \to b} \left[ |\varphi(t)|^2 \int\_a^t \text{tr}\,\Delta(s) \,ds \right] = 0. \tag{7.10.17}$$
Observe that the term $f^\*Jf$ in (7.10.8) is given by

$$f^\* Jf = 2i \text{Im} \left( \overline{f}\_2 f\_1 \right). \tag{7.10.18}$$

To estimate the term $|\operatorname{Im}(\overline{f}\_2 f\_1)|$ note that, by the definition of the function φ,

$$
\overline{f}\_1 \varphi = (\text{sgn}\,\beta) \alpha^{\frac{1}{2}} \overline{f}\_1 f\_1 + \delta^{\frac{1}{2}} \overline{f}\_1 f\_2 \quad \text{and} \quad \overline{f}\_2 \varphi = (\text{sgn}\,\beta) \alpha^{\frac{1}{2}} \overline{f}\_2 f\_1 + \delta^{\frac{1}{2}} \overline{f}\_2 f\_2.
$$

This yields the identities

$$\operatorname{Im}\left(\overline{f}\_1\varphi\right) = \delta^{\frac{1}{2}}\operatorname{Im}\left(\overline{f}\_1f\_2\right) \quad \text{and} \quad \operatorname{Im}\left(\overline{f}\_2\varphi\right) = (\operatorname{sgn}\beta)\alpha^{\frac{1}{2}}\operatorname{Im}\left(\overline{f}\_2f\_1\right).$$

Therefore, it is clear that

$$\delta^{\frac{1}{2}} |\mathrm{Im}\left(\overline{f}\_1 f\_2\right)| = |\mathrm{Im}\left(\overline{f}\_1 \varphi\right)| \le |f\_1| \, |\varphi|\,\tag{7.10.19}$$

and

$$\alpha^{\frac{1}{2}} |\mathrm{Im}\left(\overline{f}\_2 f\_1\right)| = |\mathrm{Im}\left(\overline{f}\_2 \varphi\right)| \le |f\_2| \, |\varphi|. \tag{7.10.20}$$

Since α + δ = 1, at least one of the inequalities $\alpha \ge \frac{1}{2}$ and $\delta \ge \frac{1}{2}$ holds. Note that $x \ge \frac{1}{2}$ implies $1/\sqrt{x} \le \sqrt{2}$, so it follows from (7.10.18) and (7.10.19)–(7.10.20) that

$$|f^\*Jf| \le \begin{cases} 2\sqrt{2}|\varphi||f\_2|, & \alpha \ge \frac{1}{2}, \\ 2\sqrt{2}|\varphi||f\_1|, & \delta \ge \frac{1}{2}. \end{cases}$$

Therefore, if t ≥ t0, then (7.10.12) implies that

$$|f(t)^\* J f(t)| \le 2\sqrt{2} \, C\_f |\varphi(t)| \left( \int\_a^t \text{tr}\,\Delta(s) \, ds \right)^{\frac{1}{2}}, \quad t \ge t\_0.$$

Combined with (7.10.17) this shows that (7.10.8) is satisfied. □

The next corollary follows from Theorem 7.10.1 and (7.1.6).

**Corollary 7.10.2.** Let the canonical system (7.10.5) be real and definite, and let the endpoint a be regular. Assume that Δ is trace-normed in the sense that tr Δ = 1. Then the following alternative holds:

(i) if b < ∞, then the endpoint b is in the limit-circle case;

(ii) if b = ∞, then the endpoint b is in the limit-point case.
Next, two simple examples of trace-normed canonical systems are discussed.

**Example 7.10.3.** Let ı = (−1, 1) and define the matrix function Δ by

$$
\Delta(t) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad t \in (-1, 0), \qquad \Delta(t) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \quad t \in (0, 1).
$$

A measurable function $f = (f\_1, f\_2)$ belongs to $\mathcal{L}^2\_\Delta(\iota)$ if and only if

$$\int\_{-1}^{0} |f\_1(t)|^2 \, dt < \infty \quad \text{and} \quad \int\_{0}^{1} |f\_2(t)|^2 \, dt < \infty,$$

and for $f, g \in \mathcal{L}^2\_\Delta(\iota)$ the semi-inner product is given by

$$(f,g)\_{\Delta} = \int\_{-1}^{0} \overline{g}\_1(t) f\_1(t) \, dt + \int\_{0}^{1} \overline{g}\_2(t) f\_2(t) \, dt.$$

Hence, an element $f \in \mathcal{L}^2\_\Delta(\iota)$ has Δ-norm 0 if and only if

$$
\begin{pmatrix} f\_1(t) \\ f\_2(t) \end{pmatrix} = \begin{pmatrix} 0 \\ f\_2(t) \end{pmatrix} \qquad \text{for a.e. } t \in (-1, 0)
$$

and

$$
\begin{pmatrix} f\_1(t) \\ f\_2(t) \end{pmatrix} = \begin{pmatrix} f\_1(t) \\ 0 \end{pmatrix} \qquad \text{for a.e. } t \in (0,1),
$$

where f<sup>2</sup> on (−1, 0) and f<sup>1</sup> on (0, 1) are completely arbitrary complex measurable functions.

It is straightforward to see that the regular canonical system $Jf' = \Delta g$ is definite. Hence, in the Hilbert space $L^2\_\Delta(\iota)$ the maximal relation

$$T\_{\max} = \left\{ \{f, g\} \in L^2\_{\Delta}(\iota) \times L^2\_{\Delta}(\iota) : Jf' = \Delta g\right\}$$

is well defined and for each $\{f,g\} \in T\_{\max}$ the equivalence class f contains a unique absolutely continuous representative such that $Jf' = \Delta g$; cf. Lemma 7.6.1. In fact, an absolutely continuous function $f = (f\_1, f\_2)$ satisfies $Jf' = \Delta g$ with $g \in \mathcal{L}^2\_\Delta(\iota)$ if and only if

$$\begin{pmatrix} f\_1(t) \\ f\_2(t) \end{pmatrix} = \begin{pmatrix} \gamma\_1 \\ \gamma\_2 + \int\_t^0 g\_1(s) \, ds \end{pmatrix} \qquad \text{for a.e. } t \in (-1, 0)$$

and

$$
\begin{pmatrix} f\_1(t) \\ f\_2(t) \end{pmatrix} = \begin{pmatrix} \gamma\_1 + \int\_0^t g\_2(s) \, ds \\ \gamma\_2 \end{pmatrix} \qquad \text{for a.e. } t \in (0, 1)
$$

for some constants $\gamma\_1, \gamma\_2 \in \mathbb{C}$. From the equality

$$\int\_{-1}^{1} \left( f(t) - \begin{pmatrix} \gamma\_1\\\gamma\_2 \end{pmatrix} \right)^\* \Delta(t) \left( f(t) - \begin{pmatrix} \gamma\_1\\\gamma\_2 \end{pmatrix} \right) \, dt = 0$$

it follows that $f = (f\_1, f\_2)$ and $(\gamma\_1, \gamma\_2)$ are in the same equivalence class in $L^2\_\Delta(\iota)$. Therefore,

$$
\dim\left(\text{dom}\,T\_{\text{max}}\right) = 2,
$$

and the functions $\phi = (1, 0)$ and $\psi = (0, 1)$ form an orthonormal system in $\operatorname{dom} T\_{\max}$. Furthermore, it follows from the representation of $T\_{\min}$ in Lemma 7.7.1 that $\{f,g\} \in T\_{\min}$ if and only if

$$
\gamma\_1 = 0, \quad \gamma\_2 = 0, \quad \int\_{-1}^0 g\_1(t) \, dt = 0, \quad \int\_0^1 g\_2(t) \, dt = 0.
$$

Hence,

$$\text{dom}\,T\_{\text{min}} = \{0\} \quad \text{and} \quad \text{mult}\,T\_{\text{min}} = (\text{dom}\,T\_{\text{max}})^\perp,$$

and

$$\operatorname{mult} T\_{\max} = (\operatorname{dom} T\_{\min})^\perp = L^2\_\Delta(\imath).$$

The boundary mappings in Theorem 7.7.2 are given by

$$
\Gamma\_0\{f,g\} = \frac{1}{\sqrt{2}} \begin{pmatrix} 2\gamma\_1 + \int\_0^1 g\_2(t) \, dt \\ 2\gamma\_2 + \int\_{-1}^0 g\_1(t) \, dt \end{pmatrix} \quad \text{and} \quad \Gamma\_1\{f,g\} = \frac{1}{\sqrt{2}} \begin{pmatrix} \int\_{-1}^0 g\_1(t) \, dt \\ \int\_0^1 g\_2(t) \, dt \end{pmatrix}.
$$

In order to compute the γ-field and Weyl function corresponding to the boundary triplet $\{\mathbb{C}^2, \Gamma\_0, \Gamma\_1\}$, fix a fundamental system by $Y(-1, \lambda) = I$. Then

$$Y(t,\lambda) = \begin{pmatrix} 1 & 0\\ -\lambda t - \lambda & 1 \end{pmatrix} \qquad \text{for a.e. } t \in (-1, 0),$$

$$Y(t, \lambda) = \begin{pmatrix} -\lambda^2 t + 1 & \lambda t\\ -\lambda & 1 \end{pmatrix} \qquad \text{for a.e. } t \in (0, 1).$$
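These formulas are obtained by integrating $Jf' = \lambda \Delta f$ columnwise, again taking J to be the standard matrix $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ from (7.2.2). On (−1, 0) the system reads $f\_1' = 0$ and $-f\_2' = \lambda f\_1$, so the first entry of each column is constant and the second entry is linear in t; with $Y(-1,\lambda) = I$ this gives the first matrix. On (0, 1) the system reads $-f\_2' = 0$ and $f\_1' = \lambda f\_2$, so that

$$Y(t,\lambda) = \begin{pmatrix} 1 & \lambda t \\ 0 & 1 \end{pmatrix} Y(0,\lambda) = \begin{pmatrix} 1 & \lambda t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\lambda & 1 \end{pmatrix} = \begin{pmatrix} 1 - \lambda^2 t & \lambda t \\ -\lambda & 1 \end{pmatrix}, \qquad t \in (0,1).$$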

Hence, it follows from Theorem 7.7.2 that the γ-field γ and the Weyl function M are given by

$$\gamma(\cdot,\lambda) = Y(\cdot,\lambda)\frac{\sqrt{2}}{4-\lambda^2}\begin{pmatrix} 2 & -\lambda\\ \lambda & 2-\lambda^2 \end{pmatrix} \text{ and } M(\lambda) = \frac{1}{4-\lambda^2}\begin{pmatrix} 2\lambda & -\lambda^2\\ -\lambda^2 & 2\lambda \end{pmatrix}.$$

In particular, the poles of $M$ are $\{-2, 2\}$, and hence the spectrum of $A\_0$ consists of the eigenvalues $-2$ and $2$, both of multiplicity one.
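Since the entries of $M$ are explicit rational functions of $\lambda$, the stated properties can be checked numerically. The following sketch (our own illustration, not part of the text; all names are ad hoc) verifies the symmetry $M(\overline{\lambda}) = M(\lambda)^*$, the positivity of $\operatorname{Im} M(\lambda)$ at a sample point in the upper half-plane, and the blow-up near the poles $\pm 2$:

```python
# Numerical sanity check (not from the text) for the Weyl function of
# Example 7.10.3: M(lam) = 1/(4 - lam^2) * [[2*lam, -lam^2], [-lam^2, 2*lam]].

def M(lam):
    d = 4 - lam**2
    return [[2*lam/d, -lam**2/d], [-lam**2/d, 2*lam/d]]

def adjoint(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

lam = 0.3 + 0.7j                    # sample point in the upper half-plane
A, B = M(lam), adjoint(M(lam.conjugate()))
assert all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Imaginary part (M - M^*)/(2i) at lam = i is Hermitian; positivity via trace/det.
A = M(1j)
K = [[(A[i][j] - adjoint(A)[i][j]) / 2j for j in range(2)] for i in range(2)]
tr = (K[0][0] + K[1][1]).real
det = (K[0][0]*K[1][1] - K[0][1]*K[1][0]).real
assert tr > 0 and det > 0           # Im M(i) is positive definite

# M has poles at -2 and 2: entries grow without bound nearby.
assert abs(M(2 + 1e-9)[0][0]) > 1e6
```

At $\lambda = i$ one finds $\operatorname{Im} M(i) = \tfrac{2}{5} I$, consistent with $M$ being a Nevanlinna function.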

The next example is a variant of Example 7.10.3 in the limit-point case.

**Example 7.10.4.** Let $\imath = (-1, \infty)$ and define the matrix function $\Delta$ by

$$
\Delta(t) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad t \in (-1, 0), \qquad \Delta(t) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \quad t \in (0, \infty).
$$

As in Example 7.10.3, a measurable function $f = (f\_1, f\_2)$ belongs to $L^2\_\Delta(\imath)$ if and only if

$$\int\_{-1}^{0} |f\_1(t)|^2 \, dt < \infty \quad \text{and} \quad \int\_{0}^{\infty} |f\_2(t)|^2 \, dt < \infty.$$

The semi-inner product and the elements with $\Delta$-norm $0$ are as in Example 7.10.3, except that the interval $(0, 1)$ has to be replaced by $(0, \infty)$. Furthermore, the canonical system $Jf' = \Delta g$ is definite and in the limit-point case; cf. Corollary 7.10.2. Hence, the maximal relation

$$T\_{\max} = \left\{ \{f, g\} \in L^2\_{\Delta}(\imath) \times L^2\_{\Delta}(\imath) : Jf' = \Delta g\right\}$$

is well defined in $L^2\_\Delta(\imath)$. In a similar way as in Example 7.10.3 it follows that an absolutely continuous function $f = (f\_1, f\_2) \in L^2\_\Delta(\imath)$ satisfies $Jf' = \Delta g$ with $g \in L^2\_\Delta(\imath)$ if and only if

$$
\begin{pmatrix} f\_1(t) \\ f\_2(t) \end{pmatrix} = \begin{pmatrix} \gamma\_1 \\ \int\_t^0 g\_1(s) \, ds \end{pmatrix} \qquad \text{for a.e. } t \in (-1, 0)
$$

and

$$
\begin{pmatrix} f\_1(t) \\ f\_2(t) \end{pmatrix} = \begin{pmatrix} \gamma\_1 + \int\_0^t g\_2(s) \, ds \\ 0 \end{pmatrix} \qquad \text{for a.e. } t \in (0, \infty)
$$

hold for some constant $\gamma\_1 \in \mathbb{C}$. The functions $f = (f\_1, f\_2)$ and $(\gamma\_1, 0)$ are in the same equivalence class in $L^2\_\Delta(\imath)$ and therefore

$$
\dim\left(\operatorname{dom} T\_{\max}\right) = 1,
$$

and $\operatorname{dom} T\_{\max}$ is spanned by the function $\phi = (1, 0)$. It follows from Lemma 7.8.1 that $\{f,g\} \in T\_{\min}$ if and only if

$$
\gamma\_1 = 0 \quad \text{and} \quad \int\_{-1}^{0} g\_1(t) \, dt = 0.
$$

Hence, $\operatorname{dom} T\_{\min} = \{0\}$, and $\operatorname{mul} T\_{\min}$ and $\operatorname{mul} T\_{\max}$ are related as in Example 7.10.3.

The boundary mappings in Theorem 7.8.2 are given by

$$
\Gamma\_0\{f,g\} = \gamma\_1 \quad \text{and} \quad \Gamma\_1\{f,g\} = \int\_{-1}^0 g\_1(t) \, dt.
$$

To compute the γ-field and Weyl function corresponding to the boundary triplet $\{\mathbb{C}, \Gamma\_0, \Gamma\_1\}$ use the fundamental system

$$Y(t,\lambda) = \begin{pmatrix} 1 & 0\\ -\lambda t - \lambda & 1 \end{pmatrix} \qquad \text{for a.e. } t \in (-1, 0),$$

$$Y(t, \lambda) = \begin{pmatrix} -\lambda^2 t + 1 & \lambda t\\ -\lambda & 1 \end{pmatrix} \qquad \text{for a.e. } t \in (0, \infty).$$

Clearly, not both columns of $Y(\cdot, \lambda)$ belong to $L^2\_\Delta(\imath)$, but the function $\chi(\cdot, \lambda)$ given by

$$
\chi(t, \lambda) = \begin{pmatrix} 1 \\ -\lambda t \end{pmatrix} \quad \text{for a.e.} \ t \in (-1, 0), \qquad \chi(t, \lambda) = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad \text{for a.e.} \ t \in (0, \infty),
$$

belongs to $L^2\_\Delta(\imath)$ and satisfies $J\chi'(\cdot, \lambda) = \lambda\Delta\chi(\cdot, \lambda)$. Hence, by Theorem 7.8.2, the γ-field $\gamma$ and the Weyl function $M$ are given by

$$\gamma(\cdot,\lambda) = Y(\cdot,\lambda) \begin{pmatrix} 1 \\ \lambda \end{pmatrix} \quad \text{and} \quad M(\lambda) = \lambda.$$
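As a quick cross-check of the computation above (an informal sketch in our own notation, not from the text), one can evaluate $Y(\cdot, \lambda)$ pointwise and confirm that the combination $Y(\cdot, \lambda)(1, \lambda)^\top$ reproduces $\chi(\cdot, \lambda)$ on both subintervals, so that $\Gamma\_0$ and $\Gamma\_1$ indeed yield $M(\lambda) = \lambda$:

```python
# Verify chi(., lam) = Y(., lam)(1, lam)^T for the fundamental system of
# Example 7.10.4 (our own toy check, not part of the text).

def Y(t, lam):
    if t < 0:                                    # t in (-1, 0)
        return [[1, 0], [-lam*t - lam, 1]]
    return [[-lam**2*t + 1, lam*t], [-lam, 1]]   # t in (0, oo)

def apply(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

lam = 0.5 + 1.2j
for t in [-0.9, -0.5, -0.1]:          # chi = (1, -lam*t)^T on (-1, 0)
    chi = apply(Y(t, lam), [1, lam])
    assert abs(chi[0] - 1) < 1e-12 and abs(chi[1] + lam*t) < 1e-12
for t in [0.1, 1.0, 10.0]:            # chi = (1, 0)^T on (0, oo)
    chi = apply(Y(t, lam), [1, lam])
    assert abs(chi[0] - 1) < 1e-12 and abs(chi[1]) < 1e-12

# Gamma_0 {chi, lam*chi} = gamma_1 = 1 and
# Gamma_1 {chi, lam*chi} = int_{-1}^0 lam * chi_1 dt = lam, so M(lam) = lam.
```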

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 8**

## **Schrödinger Operators on Bounded Domains**

For the multi-dimensional Schrödinger operator $-\Delta + V$ with a bounded real potential $V$ on a bounded domain $\Omega \subset \mathbb{R}^n$ with a $C^2$-smooth boundary a boundary triplet and a Weyl function will be constructed. The self-adjoint realizations of $-\Delta + V$ in $L^2(\Omega)$ and their spectral properties will be investigated. One of the main difficulties here is to provide trace mappings on the domain of the maximal realization in such a way that the second Green identity remains valid in an appropriate form. It is necessary to introduce and study Sobolev spaces on the domain $\Omega$ and its boundary $\partial\Omega$, which will be done in Section 8.2; in this context also the rigged Hilbert spaces from Section 8.1 arise as Sobolev spaces and their duals. The minimal and maximal operators, and the Dirichlet and Neumann trace maps on the maximal domain will be discussed in Section 8.3, and in Section 8.4 a boundary triplet and Weyl function for the maximal operator associated with $-\Delta + V$ are provided. The self-adjoint realizations, their spectral properties, and some natural boundary conditions are also discussed in Section 8.4. The class of semibounded self-adjoint realizations of $-\Delta + V$ in $L^2(\Omega)$ and the corresponding semibounded forms are studied in Section 8.5. For this purpose a boundary pair which is compatible with the boundary triplet in Section 8.4 is provided. Orthogonal couplings of Schrödinger operators are treated in Section 8.6 for the model problem in which $\mathbb{R}^n$ decomposes into a bounded $C^2$-domain $\Omega\_+$ and an unbounded component $\Omega\_- = \mathbb{R}^n \setminus \overline{\Omega\_+}$. Finally, in Section 8.7 the more general setting of Schrödinger operators on bounded Lipschitz domains is briefly discussed.

## **8.1 Rigged Hilbert spaces**

In this preparatory section the notion of rigged Hilbert spaces or Gelfand triples is briefly recalled. For this, let $\mathfrak{G}$ and $\mathfrak{H}$ be Hilbert spaces and assume that $\mathfrak{G}$ is densely and continuously embedded in $\mathfrak{H}$, that is, one has $\mathfrak{G} \subset \mathfrak{H}$ and the embedding operator $\iota : \mathfrak{G} \to \mathfrak{H}$ is continuous with dense range and $\ker \iota = \{0\}$. In the following the dual space $\mathfrak{H}'$ is identified with $\mathfrak{H}$, but the dual space $\mathfrak{G}'$ of antilinear continuous functionals is not identified with $\mathfrak{G}$. Instead, the isometric isomorphism

$$\mathcal{I}: \mathfrak{G}' \to \mathfrak{G}, \quad g' \mapsto \mathcal{I}g', \quad \text{where} \quad (\mathcal{I}g', g)\_{\mathfrak{G}} = g'(g), \quad g \in \mathfrak{G},\tag{8.1.1}$$

is written explicitly whenever used. In fact, in this section the continuous (antilinear) functionals on G will be identified via the scalar product in H. In the following usually the notation

$$\langle g', g \rangle\_{\mathfrak{G}' \times \mathfrak{G}} := g'(g) \tag{8.1.2}$$

is employed for the (antilinear) dual pairing in (8.1.1), and when no confusion can arise the index is suppressed, that is, one writes $\langle g', g \rangle = g'(g)$ for (8.1.2).

In the above setting the dual operator of the embedding operator ι : G → H is given by

$$\iota': \mathfrak{H} \hookrightarrow \mathfrak{G}', \quad (\iota' h)(g) = (h, \iota g)\_{\mathfrak{H}}, \quad g \in \mathfrak{G},\tag{8.1.3}$$

and in terms of the pairing $\langle \cdot, \cdot \rangle$ this means

$$\langle \iota' h, g \rangle = (h, \iota g)\_{\mathfrak{H}}, \qquad h \in \mathfrak{H}, \ g \in \mathfrak{G}. \tag{8.1.4}$$

Since the scalar product $(\cdot, \cdot)\_{\mathfrak{H}}$ is antilinear in the second argument, one has $\langle \iota' h, \lambda g \rangle = \overline{\lambda}\, \langle \iota' h, g \rangle$ for $\lambda \in \mathbb{C}$, and hence $\iota' h$ is indeed antilinear. Observe that the dual operator $\iota'$ in (8.1.3) is continuous since $\iota$ is continuous. Moreover, from the identity $\ker \iota' = (\operatorname{ran} \iota)^{\perp\_{\mathfrak{H}}}$ it follows that $\iota'$ is injective, and the range of $\iota'$ is dense in $\mathfrak{G}'$ since $\ker \iota'' = (\operatorname{ran} \iota')^{\perp\_{\mathfrak{G}'}}$ and $\iota'' = \iota$ as $\mathfrak{G}$ is reflexive. Thus,

$$\mathfrak{G} \;\stackrel{\iota}{\longrightarrow}\; \mathfrak{H} \;\stackrel{\iota'}{\longrightarrow}\; \mathfrak{G}' \qquad \text{with} \quad \operatorname{ran} \iota \subset \mathfrak{H} \text{ dense} \quad \text{and} \quad \operatorname{ran} \iota' \subset \mathfrak{G}' \text{ dense},$$

and since $\mathfrak{G}$ can be viewed as a subspace of $\mathfrak{H}$, and $\mathfrak{H}$ can be viewed as a subspace of $\mathfrak{G}'$, instead of (8.1.4) also the notation

$$\langle h, g \rangle = (h, g)\_{\mathfrak{H}}, \qquad h \in \mathfrak{H}, \ g \in \mathfrak{G}, \tag{8.1.5}$$

will be used. The present situation will appear naturally in the context of Sobolev spaces later in this chapter. First the terminology will be fixed in the next definition.

**Definition 8.1.1.** Let $\mathfrak{G}$ and $\mathfrak{H}$ be Hilbert spaces such that $\mathfrak{G}$ is densely and continuously embedded in $\mathfrak{H}$. Then the triple $\{\mathfrak{G}, \mathfrak{H}, \mathfrak{G}'\}$ is called a Gelfand triple or a rigged Hilbert space.

Assume now that $\{\mathfrak{G}, \mathfrak{H}, \mathfrak{G}'\}$ is a Gelfand triple. Since the embedding operator $\iota : \mathfrak{G} \to \mathfrak{H}$ is continuous, one has $\|g\|\_{\mathfrak{H}} \le C \|g\|\_{\mathfrak{G}}$ for all $g \in \mathfrak{G}$ with the constant $C = \|\iota\| > 0$. Moreover, as $\mathfrak{G}$ is a Hilbert space, it follows from Lemma 5.1.9 that the symmetric form

$$\mathfrak{t}[g\_1, g\_2] := (g\_1, g\_2)\_{\mathfrak{G}}, \quad \text{dom}\, \mathfrak{t} = \mathfrak{G},$$

is densely defined and closed in H with a positive lower bound. Hence, by the first representation theorem (Theorem 5.1.18) there exists a unique self-adjoint operator T with the same positive lower bound in H, such that dom T ⊂ dom t and

$$(g\_1, g\_2)\_{\mathfrak{G}} = \mathfrak{t}[g\_1, g\_2] = (Tg\_1, g\_2)\_{\mathfrak{H}}, \qquad g\_1 \in \operatorname{dom} T, \ g\_2 \in \mathfrak{G}.$$

Moreover, if $R := T^{1/2}$, then the second representation theorem (Theorem 5.1.23) implies $\operatorname{dom} R = \operatorname{dom} \mathfrak{t}$ and

$$(g\_1, g\_2)\_{\mathfrak{G}} = \mathfrak{t}[g\_1, g\_2] = (Rg\_1, Rg\_2)\_{\mathfrak{H}}, \qquad g\_1, g\_2 \in \operatorname{dom} R = \mathfrak{G}.\tag{8.1.6}$$

Note that R is a uniformly positive self-adjoint operator in H.

In the next lemma some more properties of the Gelfand triple $\{\mathfrak{G}, \mathfrak{H}, \mathfrak{G}'\}$ and the operator $R$ are collected.

**Lemma 8.1.2.** Let $\{\mathfrak{G}, \mathfrak{H}, \mathfrak{G}'\}$ be a Gelfand triple, let $\mathcal{I} : \mathfrak{G}' \to \mathfrak{G}$ be the isometric isomorphism in (8.1.1), and let $R$ be the uniformly positive self-adjoint operator in $\mathfrak{H}$ such that (8.1.6) holds. Then the following statements hold:

(i) $\mathfrak{G}'$ coincides with the completion of $\mathfrak{H}$ with respect to the norm $\|R^{-1} \cdot \|\_{\mathfrak{H}}$;

(ii) $\iota\_+ := R : \mathfrak{G} \to \mathfrak{H}$ and $\iota\_- := R\mathcal{I} : \mathfrak{G}' \to \mathfrak{H}$ are isometric isomorphisms such that

$$(\iota\_{-}g', \iota\_{+}g)\_{\mathfrak{H}} = \langle g', g \rangle, \qquad g \in \mathfrak{G}, \ g' \in \mathfrak{G}';\tag{8.1.7}$$

(iii) $\iota\_- g' = R^{-1} g'$ for all $g' \in \mathfrak{H} \subset \mathfrak{G}'$;

(iv) $\iota\_+\iota\_- h = h$ for $h \in \mathfrak{H}$ and $\iota\_-\iota\_+ g = g$ for $g \in \mathfrak{G}$;

(v) $R^{-2}$ admits an extension to an isometric operator $\widetilde{R}^{-2} : \mathfrak{G}' \to \mathfrak{G}$ and $\mathcal{I} = \widetilde{R}^{-2}$.


Proof. (i) Consider an element $g' \in \mathfrak{G}'$ and assume, in addition, that $g' \in \mathfrak{H}$. Then one has

$$\|g'\|\_{\mathfrak{G}'} = \sup\_{g \in \mathfrak{G} \backslash \{0\}} \frac{|g'(g)|}{\|g\|\_{\mathfrak{G}}} = \sup\_{g \in \mathfrak{G} \backslash \{0\}} \frac{|\langle g', g \rangle|}{\|g\|\_{\mathfrak{G}}} = \sup\_{g \in \mathfrak{G} \backslash \{0\}} \frac{|(g', g)\_{\mathfrak{H}}|}{\|g\|\_{\mathfrak{G}}},$$

where (8.1.2) was used in the second equality, and $g' \in \mathfrak{H}$ and (8.1.5) were used in the last step. Since $R$ is uniformly positive, one has $R^{-1} \in \mathbf{B}(\mathfrak{H})$, and using (8.1.6) one obtains

$$\|g'\|\_{\mathfrak{G}'} = \sup\_{g \in \mathfrak{G} \backslash \{0\}} \frac{| (R^{-1}g', Rg)\_{\mathfrak{H}} |}{\| Rg \|\_{\mathfrak{H}}} = \sup\_{h \in \mathfrak{H} \backslash \{0\}} \frac{| (R^{-1}g', h)\_{\mathfrak{H}} |}{\| h \|\_{\mathfrak{H}}} = \| R^{-1}g' \|\_{\mathfrak{H}}.$$

Therefore, $\|g'\|\_{\mathfrak{G}'} = \|R^{-1}g'\|\_{\mathfrak{H}}$ for all $g' \in \mathfrak{H} \subset \mathfrak{G}'$, and as $\mathfrak{H}$ is dense in $\mathfrak{G}'$ with respect to the norm $\|\cdot\|\_{\mathfrak{G}'}$, one concludes that $\mathfrak{G}'$ coincides with the completion of $\mathfrak{H}$ with respect to the norm $\|R^{-1} \cdot\|\_{\mathfrak{H}}$.

(ii) Observe that by the definition of ι<sup>+</sup> and (8.1.6) one has

$$\|\iota\_{+}g\|\_{\mathfrak{H}} = \|Rg\|\_{\mathfrak{H}} = \|g\|\_{\mathfrak{G}}, \qquad g \in \mathfrak{G} = \operatorname{dom}\iota\_{+} = \operatorname{dom}R,$$

and hence $\iota\_+ : \mathfrak{G} \to \mathfrak{H}$ is isometric. Moreover, since $R$ is bijective, it follows that $\iota\_+$ is an isometric isomorphism. Similarly, for $g' \in \mathfrak{G}'$ one has

$$\|\iota\_- g'\|\_{\mathfrak{H}} = \|R \mathcal{I}g'\|\_{\mathfrak{H}} = \|\mathcal{I}g'\|\_{\mathfrak{G}} = \|g'\|\_{\mathfrak{G}'},$$

where in the last step it was used that $\mathcal{I} : \mathfrak{G}' \to \mathfrak{G}$ is an isometric isomorphism. In order to check the identity (8.1.7), let $g' \in \mathfrak{G}'$ and $g \in \mathfrak{G}$. Then (8.1.6) and (8.1.1) imply

$$(\iota\_{-}g', \iota\_{+}g)\_{\mathfrak{H}} = (R\mathcal{I}g', Rg)\_{\mathfrak{H}} = (\mathcal{I}g', g)\_{\mathfrak{G}} = \langle g', g \rangle. \tag{8.1.8}$$

(iii) Let $g' \in \mathfrak{H} \subset \mathfrak{G}'$ and $g \in \mathfrak{G}$. By (8.1.5), one has

$$\langle g', g \rangle = (g', g)\_{\mathfrak{H}} = (R^{-1}g', Rg)\_{\mathfrak{H}} = (R^{-1}g', \iota\_+ g)\_{\mathfrak{H}}$$

and comparing this with (8.1.8) it follows that $R^{-1}g' = \iota\_- g'$ for all $g' \in \mathfrak{H}$.

(iv) By the definition of $\iota\_+$ and (iii) it is clear that $\iota\_+\iota\_- h = RR^{-1}h = h$ for $h \in \mathfrak{H}$. Similarly, $\iota\_-\iota\_+ g = \iota\_- Rg = R^{-1}Rg = g$ for $g \in \mathfrak{G}$ by (iii).

(v) For $h \in \mathfrak{H}$ one has $\|R^{-2}h\|\_{\mathfrak{G}} = \|R^{-1}h\|\_{\mathfrak{H}} = \|h\|\_{\mathfrak{G}'}$ by (8.1.6) and (i), and since $\mathfrak{H}$ is dense in $\mathfrak{G}'$, it follows that $R^{-2}$ admits an extension to an isometric operator $\widetilde{R}^{-2} : \mathfrak{G}' \to \mathfrak{G}$. Moreover, for $h \in \mathfrak{H}$ it follows from the definition of $\iota\_-$ in (ii) and (iii) that

$$R\mathcal{I}h = \iota\_-h = R^{-1}h,\quad \text{and hence}\quad \mathcal{I}h = R^{-2}h.$$

Thus, $\mathcal{I}$ and the restriction $R^{-2}$ of $\widetilde{R}^{-2}$ coincide on the dense subspace $\mathfrak{H} \subset \mathfrak{G}'$. This implies $\mathcal{I} = \widetilde{R}^{-2}$. $\square$
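A finite-dimensional toy model makes Lemma 8.1.2 (i) concrete. Take $\mathfrak{H} = \mathbb{R}^3$ and let $R$ be a positive diagonal matrix (our own illustrative choice; nothing here is from the text): the dual norm of a functional $g'$, computed as a supremum, agrees with $\|R^{-1} g'\|\_{\mathfrak{H}}$, and the supremum is attained at $g = R^{-2} g'$:

```python
# Finite-dimensional illustration (a toy model, not from the text) of
# Lemma 8.1.2 (i): for H = R^3 with R = diag(r_1, r_2, r_3), r_k > 0, and
# ||g||_G = ||R g||_H, the dual norm of g' equals ||R^{-1} g'||_H.
import math, random

r = [1.0, 2.0, 5.0]                          # spectrum of R (uniformly positive)
gp = [0.7, -1.3, 0.4]                        # a functional g', identified via (.,.)_H

def norm_H(v): return math.sqrt(sum(x*x for x in v))
def ratio(g):                                # |(g', g)_H| / ||g||_G
    num = abs(sum(a*b for a, b in zip(gp, g)))
    return num / norm_H([ri*gi for ri, gi in zip(r, g)])

dual = norm_H([x/ri for x, ri in zip(gp, r)])        # ||R^{-1} g'||_H
best = ratio([x/ri**2 for x, ri in zip(gp, r)])      # maximizer g = R^{-2} g'
assert abs(best - dual) < 1e-12

random.seed(0)                               # random g never beat the maximizer
for _ in range(1000):
    g = [random.uniform(-1, 1) for _ in r]
    assert ratio(g) <= dual + 1e-12
```

The bound $\operatorname{ratio}(g) \le \|R^{-1}g'\|\_{\mathfrak{H}}$ is just the Cauchy–Schwarz inequality applied to $(g', g)\_{\mathfrak{H}} = (R^{-1}g', Rg)\_{\mathfrak{H}}$, exactly as in the proof of (i).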

Now a different point of view is taken on Gelfand triples. In the next lemma it is shown that the powers $R^s$, $s \ge 0$, of a uniformly positive self-adjoint operator $R$ in $\mathfrak{H}$ give rise to Gelfand triples with certain compatibility properties.

**Lemma 8.1.3.** Let $\mathfrak{H}$ be a Hilbert space, let $R$ be a uniformly positive self-adjoint operator in $\mathfrak{H}$, let $s \ge 0$, and equip $\mathfrak{G}\_s := \operatorname{dom} R^s$ with the inner product

$$(h,k)\_{\mathfrak{G}\_s} := (R^s h, R^s k)\_{\mathfrak{H}}, \qquad h,k \in \text{dom}\, R^s. \tag{8.1.9}$$

Then $\mathfrak{G}\_t \subset \mathfrak{G}\_s$ for all $t \ge s \ge 0$ and the following statements hold:

(i) $\{\mathfrak{G}\_s, \mathfrak{H}, \mathfrak{G}'\_s\}$ is a Gelfand triple and the assertions in Lemma 8.1.2 hold with $R$, $\mathfrak{G}$, and $\mathfrak{G}'$ replaced by $R^s$, $\mathfrak{G}\_s$, and $\mathfrak{G}'\_s$, respectively.

(ii) If $\iota\_+ : \mathfrak{G}\_1 \to \mathfrak{H}$ and $\iota\_- : \mathfrak{G}'\_1 \to \mathfrak{H}$ denote the isometric isomorphisms corresponding to the Gelfand triple $\{\mathfrak{G}\_1, \mathfrak{H}, \mathfrak{G}'\_1\}$ such that

$$(\iota\_-g', \iota\_+g)\_{\mathfrak{H}} = \langle g', g \rangle\_{\mathfrak{G}'\_1 \times \mathfrak{G}\_1}, \quad g' \in \mathfrak{G}'\_1, g \in \mathfrak{G}\_1,$$

then their restrictions

$$\iota\_{+} = R : \mathfrak{G}\_{s+1} \to \mathfrak{G}\_{s} \quad \text{and} \quad \iota\_{-} = R^{-1} : \mathfrak{G}\_{s} \to \mathfrak{G}\_{s+1}, \quad s \ge 0,\tag{8.1.10}$$

are isometric isomorphisms such that $\iota\_+\iota\_- g = g$ for $g \in \mathfrak{G}\_s$ and $\iota\_-\iota\_+ l = l$ for $l \in \mathfrak{G}\_{s+1}$.

Proof. (i) For $s \ge 0$ the self-adjoint operator $R^s$ is uniformly positive in $\mathfrak{H}$ and hence $\mathfrak{G}\_s = \operatorname{dom} R^s$ equipped with the inner product (8.1.9) is a Hilbert space which is dense in $\mathfrak{H}$. Moreover, from $R^{-s} \in \mathbf{B}(\mathfrak{H})$ and (8.1.9) one obtains that

$$\|g\|\_{\mathfrak{H}} = \|R^{-s}R^s g\|\_{\mathfrak{H}} \le \|R^{-s}\| \|R^s g\|\_{\mathfrak{H}} = \|R^{-s}\| \|g\|\_{\mathfrak{G}\_s}, \qquad g \in \mathfrak{G}\_s,$$

which shows that the embedding $\mathfrak{G}\_s \hookrightarrow \mathfrak{H}$ is continuous. Therefore, if $\mathfrak{G}'\_s$ denotes the dual of $\mathfrak{G}\_s$, then $\{\mathfrak{G}\_s, \mathfrak{H}, \mathfrak{G}'\_s\}$ is a Gelfand triple. Comparing (8.1.9) with (8.1.6) shows that the operator $R^s$ plays the same role as the representing operator of the inner product in (8.1.6). Hence, the assertions of Lemma 8.1.2 are valid with $R$, $\mathfrak{G}$, and $\mathfrak{G}'$ replaced by $R^s$, $\mathfrak{G}\_s$, and $\mathfrak{G}'\_s$, respectively.

(ii) Let $s \ge 0$ and consider $l \in \mathfrak{G}\_{s+1} = \operatorname{dom} R^{s+1}$. It follows from (8.1.9) that

$$\|Rl\|\_{\mathfrak{G}\_s} = \|R^s Rl\|\_{\mathfrak{H}} = \|R^{s+1}l\|\_{\mathfrak{H}} = \|l\|\_{\mathfrak{G}\_{s+1}},$$

and hence $\iota\_+ = R : \mathfrak{G}\_{s+1} \to \mathfrak{G}\_s$ is isometric. In order to verify that this mapping is onto, let $k \in \mathfrak{G}\_s$. Then $k \in \mathfrak{H}$, and as $R$ is bijective, there exists $l \in \operatorname{dom} R$ such that $Rl = k$. Therefore, $l = R^{-1}k$, and as $k \in \mathfrak{G}\_s = \operatorname{dom} R^s$ one concludes $l \in \operatorname{dom} R^{s+1} = \mathfrak{G}\_{s+1}$. This shows that $\iota\_+ = R : \mathfrak{G}\_{s+1} \to \mathfrak{G}\_s$ is an isometric isomorphism for $s \ge 0$. A similar reasoning shows that $\iota\_- = R^{-1} : \mathfrak{G}\_s \to \mathfrak{G}\_{s+1}$ is an isometric isomorphism for $s \ge 0$. The remaining assertions $\iota\_+\iota\_- g = g$ for $g \in \mathfrak{G}\_s$ and $\iota\_-\iota\_+ l = l$ for $l \in \mathfrak{G}\_{s+1}$ follow immediately from (8.1.10). $\square$

## **8.2 Sobolev spaces, $C^2$-domains, and trace operators**

In this section Sobolev spaces on $\mathbb{R}^n$, on open subsets $\Omega \subset \mathbb{R}^n$, and on the boundaries $\partial\Omega$ of $C^2$-domains are defined and some of their features are briefly recalled. Furthermore, the mapping properties of the Dirichlet and Neumann trace maps on a $C^2$-domain $\Omega$ are recalled and the first Green identity is established.

For $s \ge 0$ the scale of $L^2$-based Sobolev spaces $H^s(\mathbb{R}^n)$ is defined with the help of the (classical) Fourier transform $\mathcal{F} \in \mathbf{B}(L^2(\mathbb{R}^n))$ by

$$H^s(\mathbb{R}^n) := \left\{ f \in L^2(\mathbb{R}^n) : (1 + |\cdot|^2)^{s/2} \mathcal{F}f \in L^2(\mathbb{R}^n) \right\},$$

and H<sup>s</sup>(R<sup>n</sup>) is equipped with the natural norm

$$\|f\|\_{H^s(\mathbb{R}^n)} := \left\| (1+|\cdot|^2)^{s/2} \mathcal{F}f \right\|\_{L^2(\mathbb{R}^n)}, \qquad f \in H^s(\mathbb{R}^n),$$

and corresponding scalar product

$$(f,g)\_{H^s(\mathbb{R}^n)} := \left( (1+|\cdot|^2)^{s/2} \mathcal{F}f, (1+|\cdot|^2)^{s/2} \mathcal{F}g \right)\_{L^2(\mathbb{R}^n)}, \quad f, g \in H^s(\mathbb{R}^n).$$

Then the space $H^s(\mathbb{R}^n)$ is a separable Hilbert space for every $s \ge 0$ and one has $H^0(\mathbb{R}^n) = L^2(\mathbb{R}^n)$. It is also useful to note that the space $C\_0^\infty(\mathbb{R}^n)$ is dense in $H^s(\mathbb{R}^n)$ for all $s \ge 0$. Since the Fourier transform is a unitary operator in $L^2(\mathbb{R}^n)$, it is clear that

$$\mathcal{R} = \mathcal{F}^{-1} (1 + |\cdot|^2)^{1/2} \mathcal{F}$$

is a uniformly positive self-adjoint operator in $L^2(\mathbb{R}^n)$ such that $\operatorname{dom} \mathcal{R} = H^1(\mathbb{R}^n)$. Furthermore, for each $s \ge 0$ one has

$$\mathcal{R}^s = \mathcal{F}^{-1} (1 + |\cdot|^2)^{s/2} \mathcal{F}$$

and hence $\mathcal{R}^s$ for $s \ge 0$ is also a uniformly positive self-adjoint operator in $L^2(\mathbb{R}^n)$ such that $\operatorname{dom} \mathcal{R}^s = H^s(\mathbb{R}^n)$. Note that the scalar product in $H^s(\mathbb{R}^n)$ satisfies

$$(f,g)\_{H^s(\mathbb{R}^n)} = (\mathcal{R}^s f, \mathcal{R}^s g)\_{L^2(\mathbb{R}^n)}, \qquad f, g \in H^s(\mathbb{R}^n),$$

for all $s \ge 0$. In particular, $\mathcal{R}$ plays the same role as the operator $R$ in (8.1.6) and $\mathcal{R}^s$ plays the same role as the operator $R^s$ in (8.1.9). Hence, $\mathcal{R}^s$, $s \ge 0$, gives rise to a Gelfand triple $\{H^s(\mathbb{R}^n), L^2(\mathbb{R}^n), H^{-s}(\mathbb{R}^n)\}$, where $H^{-s}(\mathbb{R}^n)$ denotes the dual space consisting of continuous antilinear functionals on $H^s(\mathbb{R}^n)$. From Lemma 8.1.3 it is now clear that the restrictions $\mathcal{R} : H^{s+1}(\mathbb{R}^n) \to H^s(\mathbb{R}^n)$ and $\mathcal{R}^{-1} : H^s(\mathbb{R}^n) \to H^{s+1}(\mathbb{R}^n)$ are isometric isomorphisms for $s \ge 0$.
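The role of $\mathcal{R} = \mathcal{F}^{-1}(1+|\cdot|^2)^{1/2}\mathcal{F}$ can be imitated in a discrete toy model (our own illustration with ad hoc names, not from the text), where the unitary DFT on $\mathbb{C}^N$ replaces the Fourier transform: the discrete analogue of $\mathcal{R}$ then maps the weighted "$H^{s+1}$" norm isometrically onto the "$H^s$" norm, mirroring the isometric isomorphisms obtained from Lemma 8.1.3:

```python
# Discrete toy model of the Fourier description of H^s: on C^N, with the
# unitary DFT in place of F, R = F^{-1}(1+|xi|^2)^{1/2} F maps the discrete
# "H^{s+1}" isometrically onto "H^s" (cf. (8.1.10)). Not from the text.
import cmath, math

N = 16
def dft(f):                        # unitary discrete Fourier transform
    return [sum(f[x] * cmath.exp(-2j*math.pi*k*x/N) for x in range(N)) / math.sqrt(N)
            for k in range(N)]

def freq(k):                       # symmetric integer frequencies -N/2 < xi <= N/2
    return k if k <= N // 2 else k - N

def sobolev_norm(f, s):            # ||f||_{H^s} via the Fourier weights
    return math.sqrt(sum((1 + freq(k)**2)**s * abs(c)**2 for k, c in enumerate(dft(f))))

def apply_R(f):                    # R f = F^{-1} (1+|xi|^2)^{1/2} F f
    g = [(1 + freq(k)**2)**0.5 * c for k, c in enumerate(dft(f))]
    return [sum(g[k] * cmath.exp(2j*math.pi*k*x/N) for k in range(N)) / math.sqrt(N)
            for x in range(N)]

f = [math.sin(2*math.pi*x/N) + 0.3*math.cos(6*math.pi*x/N) for x in range(N)]
for s in [0.0, 0.5, 1.0]:          # R : "H^{s+1}" -> "H^s" is isometric
    assert abs(sobolev_norm(apply_R(f), s) - sobolev_norm(f, s + 1)) < 1e-9
```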

For a nonempty open subset $\Omega \subset \mathbb{R}^n$ and $s \ge 0$ define

$$H^s(\Omega) := \left\{ f \in L^2(\Omega) : \text{there exists } g \in H^s(\mathbb{R}^n) \text{ such that } f = g|\_{\Omega} \right\}$$

and endow this space with the norm

$$\|f\|\|\_{H^s(\Omega)} := \inf\_{\substack{g \in H^s(\mathbb{R}^n) \\ f = g\vert\_{\Omega}}} \|g\|\_{H^s(\mathbb{R}^n)}, \qquad f \in H^s(\Omega). \tag{8.2.1}$$

The space $H^s(\Omega)$ is a separable Hilbert space; the corresponding scalar product will be denoted by $(\cdot, \cdot)\_{H^s(\Omega)}$. For $s \ge 0$ the space $C^\infty(\overline{\Omega}) := \{\varphi|\_{\Omega} : \varphi \in C\_0^\infty(\mathbb{R}^n)\}$ is dense in $H^s(\Omega)$. The closure of $C\_0^\infty(\Omega)$ in $H^s(\Omega)$ is a closed subspace of $H^s(\Omega)$; it is denoted by

$$H\_0^s(\Omega) := \overline{C\_0^\infty(\Omega)}^{\|\cdot\|\_{H^s(\Omega)}}.\tag{8.2.2}$$

In order to define Sobolev spaces on the boundary $\partial\Omega$ of some domain $\Omega \subset \mathbb{R}^n$, assume first that $\phi : \mathbb{R}^{n-1} \to \mathbb{R}$ is a $C^2$-function. The vectors in $\mathbb{R}^{n-1}$ will be denoted by $x' = (x\_1, \dots, x\_{n-1}) \in \mathbb{R}^{n-1}$ and the notation $(x', x\_n)$ is used for $(x\_1, \dots, x\_n) \in \mathbb{R}^n$. Then the domain

$$\Omega\_{\phi} := \left\{ (x', x\_n)^{\top} \in \mathbb{R}^n : x\_n < \phi(x') \right\} \tag{8.2.3}$$

is called a $C^2$-hypograph and its boundary is given by

$$
\partial \Omega\_{\phi} = \left\{ (x', \phi(x'))^{\top} \in \mathbb{R}^n : x' \in \mathbb{R}^{n-1} \right\}.
$$

For a measurable function $h : \partial\Omega\_\phi \to \mathbb{C}$ the surface integral on $\partial\Omega\_\phi$ is defined as

$$\int\_{\partial\Omega\_{\phi}} h \, d\sigma := \int\_{\mathbb{R}^{n-1}} h(x', \phi(x')) \sqrt{1 + |\nabla\phi(x')|^2} \, dx'. \tag{8.2.4}$$

If **1**<sup>B</sup> denotes the characteristic function of a Borel set B ⊂ ∂Ωφ, then the surface integral in (8.2.4) induces a surface measure

$$
\sigma(B) = \int\_{\partial\Omega\_{\phi}} \mathbf{1}\_{B} \, d\sigma. \tag{8.2.5}
$$

This surface measure also gives rise to the usual $L^2$-space on $\partial\Omega\_\phi$, which will be denoted by $L^2(\partial\Omega\_\phi)$. Furthermore, for $s \in [0, 2]$ define the Sobolev space of order $s$ on $\partial\Omega\_\phi$ by

$$H^s(\partial \Omega\_\phi) := \left\{ h \in L^2(\partial \Omega\_\phi) : x' \mapsto h(x', \phi(x')) \in H^s(\mathbb{R}^{n-1}) \right\}$$

and equip Hs(∂Ωφ) with the corresponding Hilbert space scalar product

$$(h,k)\_{H^s(\partial\Omega\_\phi)} := \left(h(\cdot,\phi(\cdot)),k(\cdot,\phi(\cdot))\right)\_{H^s(\mathbb{R}^{n-1})}, \quad h,k \in H^s(\partial\Omega\_\phi). \tag{8.2.6}$$

Note that the operator $V\_\phi : H^s(\partial\Omega\_\phi) \to H^s(\mathbb{R}^{n-1})$ that maps $h \in H^s(\partial\Omega\_\phi)$ to the function $x' \mapsto h(x', \phi(x')) \in H^s(\mathbb{R}^{n-1})$ is an isometric isomorphism.

In the next step the notion of a $C^2$-hypograph is replaced by that of a bounded domain with a $C^2$-smooth boundary, that is, a boundary which is locally the boundary of a $C^2$-hypograph.

**Definition 8.2.1.** A bounded nonempty open subset $\Omega \subset \mathbb{R}^n$ is called a $C^2$-domain if there exist open sets $U\_1, \dots, U\_l \subset \mathbb{R}^n$ and (possibly up to rotations of coordinates) $C^2$-hypographs $\Omega\_1, \dots, \Omega\_l \subset \mathbb{R}^n$ such that

$$
\partial \Omega \subset \bigcup\_{j=1}^{l} U\_j \quad \text{and} \quad \Omega \cap U\_j = \Omega\_j \cap U\_j, \quad j = 1, \dots, l.
$$

Let $\Omega \subset \mathbb{R}^n$ be a bounded $C^2$-domain as in Definition 8.2.1. Then the boundary $\partial\Omega \subset \mathbb{R}^n$ is compact and there exists a partition of unity subordinate to the open cover $\{U\_j\}$ of $\partial\Omega$, that is, there exist functions $\eta\_j \in C\_0^\infty(\mathbb{R}^n)$, $j = 1, \dots, l$, with $\operatorname{supp} \eta\_j \subset U\_j$ such that $0 \le \eta\_j(x) \le 1$ for all $x \in \mathbb{R}^n$ and $\sum\_{j=1}^l \eta\_j(x) = 1$ for all $x \in \partial\Omega$. For a measurable function $h : \partial\Omega \to \mathbb{C}$ the surface integral on $\partial\Omega$ is defined as

$$\int\_{\partial\Omega} h \, d\sigma := \sum\_{j=1}^{l} \int\_{\mathbb{R}^{n-1}} \eta\_j(x', \phi\_j(x')) h(x', \phi\_j(x')) \sqrt{1 + |\nabla \phi\_j(x')|^2} \, dx',$$

where the $C^2$-functions $\phi\_j : \mathbb{R}^{n-1} \to \mathbb{R}$ define the $C^2$-hypographs $\Omega\_j$ as in (8.2.3) and the possible rotation of coordinates is suppressed. This surface integral induces a surface measure and the notion of an $L^2$-space $L^2(\partial\Omega)$ in the same way as in (8.2.4) and (8.2.5). In the present setting the Sobolev space $H^s(\partial\Omega)$ for $s \in [0, 2]$ is now defined by

$$H^s(\partial \Omega) := \left\{ h \in L^2(\partial \Omega) : \eta\_j h \in H^s(\partial \Omega\_j), \ j = 1, \dots, l \right\}$$

and is equipped with the corresponding Hilbert space scalar product

$$(h,k)\_{H^s(\partial\Omega)} = \sum\_{j=1}^l (\eta\_j h, \eta\_j k)\_{H^s(\partial\Omega\_j)}, \quad h, k \in H^s(\partial\Omega). \tag{8.2.7}$$

It follows from the construction that $H^s(\partial\Omega)$ is densely and continuously embedded in $L^2(\partial\Omega)$ for $s \in [0, 2]$. Furthermore, since $\partial\Omega$ is a compact subset of $\mathbb{R}^n$, the embedding

$$H^t(\partial\Omega) \hookrightarrow H^s(\partial\Omega), \qquad 0 \le s < t \le 2,\tag{8.2.8}$$

is compact; see, e.g., [774, Theorem 7.10].

For later purposes it is convenient to use an equivalent characterization of the spaces $H^s(\partial\Omega)$ via interpolation; cf. [573, Theorem B.11]. More precisely, as in (8.1.6) it follows that there exists a unique uniformly positive self-adjoint operator $Q$ in $L^2(\partial\Omega)$ such that

$$\text{dom}\,Q = H^2(\partial\Omega) \quad \text{and} \quad (h,k)\_{H^2(\partial\Omega)} = (Qh,Qk)\_{L^2(\partial\Omega)}\tag{8.2.9}$$

for all $h, k \in H^2(\partial\Omega)$. It can be shown that the spaces $H^s(\partial\Omega)$ coincide with the domains $\operatorname{dom} Q^{s/2}$ for $s \in [0, 2]$ and that $(Q^{s/2}\cdot, Q^{s/2}\cdot)\_{L^2(\partial\Omega)}$ defines a scalar product and an equivalent norm on $H^s(\partial\Omega)$. The dual space of the antilinear continuous functionals on $H^s(\partial\Omega)$ is denoted by $H^{-s}(\partial\Omega)$, $s \in [0, 2]$. Then one obtains the following statement from Lemma 8.1.2 and Lemma 8.1.3.

**Corollary 8.2.2.** Let $\Omega \subset \mathbb{R}^n$ be a bounded $C^2$-domain and let $s \in [0, 2]$. Then the following statements hold:

(i) $\{H^s(\partial\Omega), L^2(\partial\Omega), H^{-s}(\partial\Omega)\}$ is a Gelfand triple and the assertions in Lemma 8.1.2 hold with $R$, $\mathfrak{G}$, and $\mathfrak{G}'$ replaced by $Q^{s/2}$, $H^s(\partial\Omega)$, and $H^{-s}(\partial\Omega)$, respectively.

(ii) If $\iota\_+ : H^{1/2}(\partial\Omega) \to L^2(\partial\Omega)$ and $\iota\_- : H^{-1/2}(\partial\Omega) \to L^2(\partial\Omega)$ denote the isometric isomorphisms from Lemma 8.1.2 (ii) corresponding to the Gelfand triple $\{H^{1/2}(\partial\Omega), L^2(\partial\Omega), H^{-1/2}(\partial\Omega)\}$ such that

$$(\iota\_-\varphi,\iota\_+\psi)\_{L^2(\partial\Omega)} = \langle\varphi,\psi\rangle\_{H^{-1/2}(\partial\Omega)\times H^{1/2}(\partial\Omega)}$$

holds for $\varphi \in H^{-1/2}(\partial\Omega)$ and $\psi \in H^{1/2}(\partial\Omega)$, then for $s \in [0, 3/2]$ their restrictions

$$\iota\_+ = Q^{1/4} : H^{s+1/2}(\partial \Omega) \to H^s(\partial \Omega),$$

and

$$\iota\_- = Q^{-1/4} : H^s(\partial \Omega) \to H^{s+1/2}(\partial \Omega),$$

are isometric isomorphisms such that $\iota\_+\iota\_-\varphi = \varphi$ for $\varphi \in H^s(\partial\Omega)$ and $\iota\_-\iota\_+\chi = \chi$ for $\chi \in H^{s+1/2}(\partial\Omega)$; here $Q$ is the uniformly positive self-adjoint operator in (8.2.9).

Assume now that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain as in Definition 8.2.1. The weak derivative of order $|\alpha|$ of an $L^2$-function $f$ is denoted by $D^\alpha f$ in the following; as usual, here $\alpha \in \mathbb{N}\_0^n$ stands for a multi-index and $|\alpha| = \alpha\_1 + \cdots + \alpha\_n$. Then for $k \in \mathbb{N}\_0$ one has

$$H^k(\Omega) = \left\{ f \in L^2(\Omega) : D^\alpha f \in L^2(\Omega) \text{ for all } \alpha \in \mathbb{N}\_0^n \text{ with } |\alpha| \le k \right\}$$

and

$$\|f\|\_{k} := \sum\_{|\alpha| \le k} \|D^{\alpha}f\|\_{L^{2}(\Omega)}, \quad f \in H^{k}(\Omega), \tag{8.2.10}$$

is equivalent to the norm on $H^k(\Omega)$ in (8.2.1); cf. [573, Theorem 3.30]. Recall also that for $k \in \mathbb{N}$ there exists $C\_k > 0$ such that the Poincaré inequality

$$\|f\|\_{k} \le C\_{k} \sum\_{|\alpha|=k} \|D^{\alpha}f\|\_{L^{2}(\Omega)}, \qquad f \in H\_{0}^{k}(\Omega), \tag{8.2.11}$$

is valid. In particular, for $f \in C\_0^\infty(\Omega)$ and $k = 2$, integration by parts and the Schwarz theorem give

$$\begin{aligned} \sum\_{|\alpha|=2} \|D^{\alpha}f\|\_{L^2(\Omega)}^2 &= \sum\_{|\alpha|=2} (D^{\alpha}f, D^{\alpha}f)\_{L^2(\Omega)} \\ &= \sum\_{j,k=1}^n (\partial\_j\partial\_k f, \partial\_j\partial\_k f)\_{L^2(\Omega)} \\ &= \sum\_{j,k=1}^n (\partial\_j^2 f, \partial\_k^2 f)\_{L^2(\Omega)} \\ &= \|\Delta f\|\_{L^2(\Omega)}^2, \end{aligned}$$

and this equality extends to all $f \in H\_0^2(\Omega)$ by (8.2.2). As a consequence one obtains the following useful fact.

**Lemma 8.2.3.** The mapping $f \mapsto \|\Delta f\|\_{L^2(\Omega)}$ is a norm on $H\_0^2(\Omega)$ which is equivalent to the norms $\|\cdot\|\_2$ and $\|\cdot\|\_{H^2(\Omega)}$ in (8.2.10) and (8.2.1), respectively.
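The identity $\sum\_{j,k} \|\partial\_j\partial\_k f\|^2\_{L^2(\Omega)} = \|\Delta f\|^2\_{L^2(\Omega)}$ underlying Lemma 8.2.3 can also be tested numerically. The sketch below (our own example, not from the text) uses $f(x,y) = \sin^2(\pi x)\sin^2(\pi y)$ on the unit square, which vanishes together with its gradient on the boundary, with analytic second derivatives and a midpoint quadrature rule:

```python
# Numerical check of sum_{j,k} ||d_j d_k f||^2 = ||Delta f||^2 for a function
# vanishing with its gradient on the boundary of (0,1)^2 (toy example).
import math

pi = math.pi
# second derivatives of f(x,y) = sin^2(pi x) sin^2(pi y), computed by hand
def fxx(x, y): return 2*pi**2 * math.cos(2*pi*x) * math.sin(pi*y)**2
def fyy(x, y): return 2*pi**2 * math.sin(pi*x)**2 * math.cos(2*pi*y)
def fxy(x, y): return pi**2 * math.sin(2*pi*x) * math.sin(2*pi*y)

N = 100
pts = [(i + 0.5) / N for i in range(N)]
def integrate(g):                  # midpoint rule on (0,1)^2
    return sum(g(x, y) for x in pts for y in pts) / N**2

# sum over j,k counts the mixed derivative twice
lhs = integrate(lambda x, y: fxx(x, y)**2 + 2*fxy(x, y)**2 + fyy(x, y)**2)
rhs = integrate(lambda x, y: (fxx(x, y) + fyy(x, y))**2)
assert abs(lhs - rhs) < 1e-6 * rhs
```

For general $f \in H^2(\Omega)$ without the boundary conditions the two quantities differ, which is why the lemma is restricted to $H\_0^2(\Omega)$.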

Let again $\Omega \subset \mathbb{R}^n$ be a bounded $C^2$-domain and denote the outward-pointing unit normal vector field on $\partial\Omega$ by $\nu$. The notion of a trace operator or trace map and some of its properties are discussed next. Recall first that the mapping

$$C^{\infty}(\overline{\Omega}) \ni f \mapsto \left\{ f|\_{\partial\Omega}, \frac{\partial f}{\partial \nu} \Big|\_{\partial\Omega} \right\} \in H^{3/2}(\partial\Omega) \times H^{1/2}(\partial\Omega)$$

extends by continuity to a continuous operator

$$H^2(\Omega) \ni f \mapsto \{\tau\_\mathcal{D} f, \tau\_\mathcal{N} f\} \in H^{3/2}(\partial\Omega) \times H^{1/2}(\partial\Omega),\tag{8.2.12}$$

which is surjective; here

$$\tau\_{\rm D}: H^2(\Omega) \to H^{3/2}(\partial\Omega) \tag{8.2.13}$$

denotes the Dirichlet trace operator and

$$
\tau\_{\rm N}: H^2(\Omega) \to H^{1/2}(\partial\Omega) \tag{8.2.14}
$$

denotes the Neumann trace operator. In particular, for all $f \in C^\infty(\overline{\Omega})$ one has

$$
\tau\_{\rm D} f = f|\_{\partial\Omega} \quad \text{and} \quad \tau\_{\rm N} f = \frac{\partial f}{\partial \nu}\Big|\_{\partial \Omega},
$$

respectively. With the help of the trace operators one has another useful characterization of the space $H\_0^2(\Omega)$ in (8.2.2), namely,

$$H\_0^2(\Omega) = \left\{ f \in H^2(\Omega) : \tau\_\mathcal{D} f = \tau\_\mathcal{N} f = 0 \right\}.\tag{8.2.15}$$

It will also be used that the Dirichlet trace operator $\tau\_{\rm D} : H^2(\Omega) \to H^{3/2}(\partial\Omega)$ admits a continuous surjective extension

$$
\tau\_{\rm D}^{(1)}: H^1(\Omega) \to H^{1/2}(\partial \Omega), \tag{8.2.16}
$$

which, in analogy to (8.2.15), leads to the characterization

$$H\_0^1(\Omega) = \{ f \in H^1(\Omega) : \tau\_\mathcal{D}^{(1)} f = 0 \}. \tag{8.2.17}$$

Recall next that for $f \in H^2(\Omega)$ and $g \in H^1(\Omega)$ the first Green identity

$$(-\Delta f, g)\_{L^2(\Omega)} = (\nabla f, \nabla g)\_{L^2(\Omega; \mathbb{C}^n)} - \left(\tau\_\mathcal{N} f, \tau\_\mathcal{D}^{(1)} g\right)\_{L^2(\partial\Omega)}\tag{8.2.18}$$

holds. Note that $\tau\_{\rm N} f,\ \tau\_{\rm D}^{(1)} g \in H^{1/2}(\partial\Omega)$ by (8.2.14) and (8.2.16). If, in addition, also $g \in H^2(\Omega)$, then one concludes from (8.2.18) the second Green identity

$$(-\Delta f, g)\_{L^2(\Omega)} - (f, -\Delta g)\_{L^2(\Omega)} = (\tau\_\mathcal{D} f, \tau\_\mathcal{N} g)\_{L^2(\partial\Omega)} - (\tau\_\mathcal{N} f, \tau\_\mathcal{D} g)\_{L^2(\partial\Omega)}, \tag{8.2.19}$$

which is valid for all $f, g \in H^2(\Omega)$.
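For the reader's convenience, the step leading from (8.2.18) to (8.2.19) can be written out. Applying the first Green identity to the pair $(g, f)$ and taking complex conjugates gives a companion formula; subtracting it from (8.2.18) makes the gradient terms cancel:

```latex
% First Green identity for (f,g) and, after conjugation, for (g,f);
% here f, g \in H^2(\Omega), so \tau_D^{(1)} g = \tau_D g.
\begin{aligned}
(-\Delta f, g)_{L^2(\Omega)} &= (\nabla f, \nabla g)_{L^2(\Omega;\mathbb{C}^n)}
  - (\tau_{\mathrm{N}} f, \tau_{\mathrm{D}} g)_{L^2(\partial\Omega)}, \\
(f, -\Delta g)_{L^2(\Omega)} = \overline{(-\Delta g, f)_{L^2(\Omega)}}
  &= (\nabla f, \nabla g)_{L^2(\Omega;\mathbb{C}^n)}
  - (\tau_{\mathrm{D}} f, \tau_{\mathrm{N}} g)_{L^2(\partial\Omega)}.
\end{aligned}
% Subtracting the second line from the first yields (8.2.19).
```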

In the next lemma it will be shown that the Neumann trace operator $\tau\_{\rm N}$ in (8.2.14) admits an extension to the subspace of $H^1(\Omega)$ consisting of all those functions $f \in H^1(\Omega)$ such that $\Delta f \in L^2(\Omega)$, and it turns out that the first Green identity (8.2.18) remains valid in an extended form. Here, and in the following, the expression $\Delta f$ is understood in the sense of distributions. If, in addition, $\Delta f \in L^2(\Omega)$, then $\Delta f$ is a regular distribution generated by the function $\Delta f \in L^2(\Omega)$ via

$$(\Delta f)(\varphi) = \int\_{\Omega} (\Delta f)(x) \, \overline{\varphi(x)} \, dx, \qquad \varphi \in C\_0^{\infty}(\Omega). \tag{8.2.20}$$

**Lemma 8.2.4.** For $f \in H^1(\Omega)$ with $\Delta f \in L^2(\Omega)$ there exists a unique element $\varphi \in H^{-1/2}(\partial\Omega)$ such that

$$(-\Delta f, g)\_{L^2(\Omega)} = (\nabla f, \nabla g)\_{L^2(\Omega; \mathbb{C}^n)} - \left\langle \varphi, \tau\_{\rm D}^{(1)} g \right\rangle\_{H^{-1/2}(\partial \Omega) \times H^{1/2}(\partial \Omega)}\tag{8.2.21}$$

holds for all $g \in H^1(\Omega)$. In the following the notation $\tau\_{\rm N}^{(1)} f := \varphi$ will be used.

Proof. Notice that there exists a bounded right inverse of $\tau\_{\rm D}^{(1)}$ in (8.2.16), that is, there is a bounded operator $\eta : H^{1/2}(\partial\Omega) \to H^1(\Omega)$ with the property

$$
\tau\_\mathcal{D}^{(1)}\eta\psi = \psi, \qquad \psi \in H^{1/2}(\partial\Omega).
$$

For a fixed $f \in H^1(\Omega)$ such that $\Delta f \in L^2(\Omega)$ define the antilinear functional $\varphi : H^{1/2}(\partial\Omega) \to \mathbb{C}$ by

$$\varphi(\psi) := (\nabla f, \nabla \eta \psi)\_{L^2(\Omega; \mathbb{C}^n)} + (\Delta f, \eta \psi)\_{L^2(\Omega)}, \quad \psi \in H^{1/2}(\partial \Omega). \tag{8.2.22}$$

Then one has

$$\begin{aligned} |\varphi(\psi)| &\le \|\nabla f\|\_{L^2(\Omega; \mathbb{C}^n)} \|\nabla \eta \psi\|\_{L^2(\Omega; \mathbb{C}^n)} + \|\Delta f\|\_{L^2(\Omega)} \|\eta \psi\|\_{L^2(\Omega)} \\ &\le C \|\eta \psi\|\_{H^1(\Omega)} \\ &\le C' \|\psi\|\_{H^{1/2}(\partial \Omega)} \end{aligned}$$

with some constants $C, C' > 0$, and hence $\varphi \in H^{-1/2}(\partial\Omega)$. Thus, (8.2.22) can also be written in the form

$$\langle \varphi, \psi \rangle\_{H^{-1/2}(\partial \Omega) \times H^{1/2}(\partial \Omega)} = (\nabla f, \nabla \eta \psi)\_{L^2(\Omega; \mathbb{C}^n)} + (\Delta f, \eta \psi)\_{L^2(\Omega)},\tag{8.2.23}$$

where $\psi \in H^{1/2}(\partial\Omega)$. Now let $g \in H^1(\Omega)$ and set $g\_0 := g - \eta \tau\_{\rm D}^{(1)} g$. Then it follows from the characterization of the space $H\_0^1(\Omega)$ in (8.2.17) that $g\_0 \in H\_0^1(\Omega)$, and hence (8.2.2) shows that there is a sequence $(g\_m) \subset C\_0^\infty(\Omega)$ such that $g\_m \to g\_0$ in $H^1(\Omega)$. It follows that

$$\begin{aligned} (\nabla f, \nabla g\_0)\_{L^2(\Omega; \mathbb{C}^n)} &= \lim\_{m \to \infty} (\nabla f, \nabla g\_m)\_{L^2(\Omega; \mathbb{C}^n)} \\ &= - \lim\_{m \to \infty} (\Delta f, g\_m)\_{L^2(\Omega)} \\ &= - (\Delta f, g\_0)\_{L^2(\Omega)} \end{aligned}$$

and one obtains, together with (8.2.23) (applied with $\psi = \tau\_{\rm D}^{(1)} g$), that

$$\begin{split} (\nabla f, \nabla g)\_{L^{2}(\Omega; \mathbb{C}^{n})} &= \left(\nabla f, \nabla \left(g\_{0} + \eta \tau\_{\rm D}^{(1)} g\right)\right)\_{L^{2}(\Omega; \mathbb{C}^{n})} \\ &= -\left(\Delta f, g\_{0}\right)\_{L^{2}(\Omega)} + \left(\nabla f, \nabla \left(\eta \tau\_{\rm D}^{(1)} g\right)\right)\_{L^{2}(\Omega; \mathbb{C}^{n})} \\ &= -\left(\Delta f, g\_{0}\right)\_{L^{2}(\Omega)} + \left\langle\varphi, \tau\_{\rm D}^{(1)} g\right\rangle\_{H^{-1/2}(\partial\Omega) \times H^{1/2}(\partial\Omega)} - \left(\Delta f, \eta \tau\_{\rm D}^{(1)} g\right)\_{L^{2}(\Omega)} \\ &= \left(-\Delta f, g\right)\_{L^{2}(\Omega)} + \left\langle\varphi, \tau\_{\rm D}^{(1)} g\right\rangle\_{H^{-1/2}(\partial\Omega) \times H^{1/2}(\partial\Omega)}. \end{split}$$

This shows that $\varphi \in H^{-1/2}(\partial\Omega)$ in (8.2.22)–(8.2.23) satisfies (8.2.21).

It remains to check that $\varphi \in H^{-1/2}(\partial\Omega)$ in (8.2.21) is unique. Suppose that $\varphi\_1, \varphi\_2 \in H^{-1/2}(\partial\Omega)$ both satisfy (8.2.21). Then

$$\left\langle \varphi\_1 - \varphi\_2, \tau\_{\rm D}^{(1)} g \right\rangle\_{H^{-1/2}(\partial \Omega) \times H^{1/2}(\partial \Omega)} = 0$$

for all $g \in H^1(\Omega)$. As $\tau\_{\rm D}^{(1)} : H^1(\Omega) \to H^{1/2}(\partial\Omega)$ is surjective, it follows that $\varphi\_1 - \varphi\_2 = 0$, and hence $\varphi$ in (8.2.21) is unique. $\square$

**Remark 8.2.5.** The assertion in Lemma 8.2.4 and its proof extend in a natural manner to all $f \in H^1(\Omega)$ such that $-\Delta f \in H^1(\Omega)^\*$. In this situation there still exists a unique element $\varphi \in H^{-1/2}(\partial\Omega)$ such that, instead of (8.2.21), one has the slightly more general first Green identity

$$\langle -\Delta f, g \rangle\_{H^1(\Omega)^\* \times H^1(\Omega)} = (\nabla f, \nabla g)\_{L^2(\Omega; \mathbb{C}^n)} - \langle \varphi, \tau\_{\rm D}^{(1)} g \rangle\_{H^{-1/2}(\partial \Omega) \times H^{1/2}(\partial \Omega)}$$

for all $g \in H^1(\Omega)$; cf. [573, Lemma 4.3].

## **8.3 Trace maps for the maximal Schrödinger operator**

The differential expression $-\Delta + V$ is considered on a bounded domain $\Omega$, where the function $V \in L^\infty(\Omega)$ is assumed to be real. One then associates with $-\Delta + V$ a preminimal, a minimal, and a maximal operator in $L^2(\Omega)$, which are adjoints of each other. Furthermore, the Dirichlet and Neumann operators are defined via the corresponding sesquilinear forms and the first representation theorem, and some of their properties are collected. In the case where $\Omega$ is a bounded $C^2$-domain it is shown in Theorem 8.3.9 and Theorem 8.3.10 that the Dirichlet and Neumann trace operators from the previous section admit continuous extensions to the maximal domain; this is a key ingredient in the construction of a boundary triplet in the next section.

Let $n \ge 2$, let $\Omega \subset \mathbb{R}^n$ be a bounded domain, and assume that the function $V \in L^\infty(\Omega)$ is real. The preminimal operator associated with the differential expression $-\Delta + V$ is defined as

$$T\_0 = -\Delta + V, \qquad \text{dom}\, T\_0 = C\_0^{\infty}(\Omega).$$

It follows immediately from

$$(T\_0 f, f)\_{L^2(\Omega)} = (\nabla f, \nabla f)\_{L^2(\Omega; \mathbb{C}^n)} + (Vf, f)\_{L^2(\Omega)}, \quad f \in \text{dom}\, T\_0,$$

that $T\_0$ is a densely defined symmetric operator in $L^2(\Omega)$ which is bounded from below with $v\_- := \operatorname{ess\,inf} V$ as a lower bound, so that $T\_0 - v\_-$ is nonnegative. In fact, for $f \in \operatorname{dom} T\_0$ one has

$$\left( (T\_0 - v\_-)f, f \right)\_{L^2(\Omega)} = (\nabla f, \nabla f)\_{L^2(\Omega; \mathbb{C}^n)} + \left( (V - v\_-)f, f \right)\_{L^2(\Omega)} \ge \| \nabla f \|\_{L^2(\Omega; \mathbb{C}^n)}^2$$

and hence, by the Poincaré inequality (8.2.11),

$$\left( (T\_0 - v\_{-})f, f \right)\_{L^2(\Omega)} \ge C \| f \|\_{1}^2 \ge C \| f \|\_{L^2(\Omega)}^2 \tag{8.3.1}$$

with some constant $C > 0$. This shows that $T\_0 - v\_-$ is uniformly positive.
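To see the Poincaré inequality at work in the simplest setting, the following sketch (an illustration only; the one-dimensional interval $\Omega = (0,1)$, the test functions, and the classical best constant $1/\pi$ are assumptions made here, not data from the text) compares $\|f\|\_{L^2}$ with $\|f'\|\_{L^2}$ for a few functions vanishing at the endpoints:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Functions vanishing at the endpoints of (0, 1), i.e. elements of H^1_0(0, 1).
# On this interval the best constant in ||f|| <= C ||f'|| is the classical
# value 1/pi, attained by sin(pi x) (assumed here as a known 1-d fact).
ratios = []
for f in [x*(1 - x), sp.sin(sp.pi*x), x*(1 - x)**2]:
    norm_f = sp.sqrt(sp.integrate(f**2, (x, 0, 1)))
    norm_df = sp.sqrt(sp.integrate(sp.diff(f, x)**2, (x, 0, 1)))
    ratios.append(float(norm_f / norm_df))

print(ratios)  # each ratio stays below (or attains) 1/pi
```

The second test function attains the ratio $1/\pi$ exactly, illustrating that such an inequality can hold with a uniform constant on all of $H\_0^1$, which is what makes $T\_0 - v\_-$ uniformly positive above.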

The closure of $T\_0$ in $L^2(\Omega)$ is the minimal operator

$$T\_{\rm min} = -\Delta + V, \qquad \text{dom}\, T\_{\rm min} = H\_0^2(\Omega). \tag{8.3.2}$$

In fact, using Lemma 8.2.3 and the fact that $V \in L^\infty(\Omega)$, one obtains that the graph norm

$$\|\cdot\|\_{L^2(\Omega)} + \|T\_{\text{min}} \cdot\|\_{L^2(\Omega)}$$

is equivalent to the $H^2$-norm on the closed subspace $H\_0^2(\Omega)$ of $H^2(\Omega)$. Hence, $T\_{\rm min}$ is a closed operator in $L^2(\Omega)$ and it follows from (8.2.2) that $\overline{T\_0} = T\_{\rm min}$. Therefore, $T\_{\rm min}$ is a densely defined closed symmetric operator in $L^2(\Omega)$ and $T\_{\rm min} - v\_-$ is uniformly positive.

Besides the preminimal and the minimal operator, also the maximal operator $T\_{\rm max}$ associated with $-\Delta + V$ in $L^2(\Omega)$ will be important in the sequel; it is defined by

$$\begin{aligned} T\_{\text{max}} &= -\Delta + V, \\ \text{dom}\, T\_{\text{max}} &= \left\{ f \in L^2(\Omega) : -\Delta f + Vf \in L^2(\Omega) \right\}. \end{aligned} \tag{8.3.3}$$

Here the expression $\Delta f$ for $f \in L^2(\Omega)$ is understood in the distributional sense. Since $V \in L^\infty(\Omega)$, it is clear that $f \in L^2(\Omega)$ belongs to $\operatorname{dom} T\_{\rm max}$ if and only if $\Delta f \in L^2(\Omega)$, that is, the (regular) distribution $\Delta f$ is generated by an $L^2$-function; cf. (8.2.20). Observe that $H^2(\Omega) \subset \operatorname{dom} T\_{\rm max}$, and it will also turn out that $H^2(\Omega) \neq \operatorname{dom} T\_{\rm max}$.

**Proposition 8.3.1.** Let $T\_0$, $T\_{\rm min}$, and $T\_{\rm max}$ be the preminimal, minimal, and maximal operators associated with $-\Delta + V$ in $L^2(\Omega)$, respectively. Then $\overline{T\_0} = T\_{\rm min}$, and

$$(T\_{\rm min})^\* = T\_{\rm max} \quad \text{and} \quad T\_{\rm min} = (T\_{\rm max})^\*.\tag{8.3.4}$$

Proof. It has already been shown above that $\overline{T\_0} = T\_{\rm min}$ holds. In particular, this implies $T\_0^\* = (T\_{\rm min})^\*$, and thus for the first identity in (8.3.4) it suffices to show $T\_0^\* = T\_{\rm max}$. Furthermore, since multiplication by $V \in L^\infty(\Omega)$ is a bounded operator in $L^2(\Omega)$, it is no restriction to assume $V = 0$ in the following. Let $f \in \operatorname{dom} T\_0^\*$ and consider $T\_0^\* f \in L^2(\Omega)$ as a distribution. Then one has for all $\varphi \in C\_0^\infty(\Omega) = \operatorname{dom} T\_0$

$$(T\_0^\*f)(\varphi) = (T\_0^\*f, \overline{\varphi})\_{L^2(\Omega)} = (f, T\_0\overline{\varphi})\_{L^2(\Omega)} = (f, -\Delta\overline{\varphi})\_{L^2(\Omega)} = (-\Delta f)(\varphi),$$

and hence $-\Delta f = T\_0^\* f \in L^2(\Omega)$. Thus, $f \in \operatorname{dom} T\_{\rm max}$ and $T\_{\rm max} f = T\_0^\* f$. Conversely, for $f \in \operatorname{dom} T\_{\rm max}$ and all $\varphi \in C\_0^\infty(\Omega) = \operatorname{dom} T\_0$ one has

$$(T\_0\varphi, f)\_{L^2(\Omega)} = (-\Delta\varphi, f)\_{L^2(\Omega)} = (\varphi, -\Delta f)\_{L^2(\Omega)},$$

that is, $f \in \operatorname{dom} T\_0^\*$ and $T\_0^\* f = -\Delta f = T\_{\rm max} f$. Thus, the first identity in (8.3.4) has been shown. The second identity in (8.3.4) follows by taking adjoints. $\square$

In the following the self-adjoint Dirichlet realization $A\_{\rm D}$ and the self-adjoint Neumann realization $A\_{\rm N}$ of $-\Delta + V$ in $L^2(\Omega)$ will play an important role. The operators $A\_{\rm D}$ and $A\_{\rm N}$ will be introduced via the corresponding sesquilinear forms using the first representation theorem. More precisely, consider the densely defined forms

$$\mathfrak{t}\_{\mathsf{D}}[f,g] = (\nabla f, \nabla g)\_{L^2(\Omega; \mathbb{C}^n)} + (Vf, g)\_{L^2(\Omega)}, \quad \operatorname{dom} \mathfrak{t}\_{\mathsf{D}} = H\_0^1(\Omega),$$

and

$$\mathfrak{t}\_{\rm N}[f,g] = (\nabla f, \nabla g)\_{L^2(\Omega; \mathbb{C}^n)} + (Vf, g)\_{L^2(\Omega)}, \quad \text{dom}\, \mathfrak{t}\_{\rm N} = H^1(\Omega),$$

in $L^2(\Omega)$. It is easy to see that both forms are semibounded from below with lower bound $v\_- = \operatorname{ess\,inf} V$. The same argument as in (8.3.1), using the Poincaré inequality (8.2.11) on $\operatorname{dom} \mathfrak{t}\_{\rm D} = H\_0^1(\Omega)$, yields the stronger statement that the form $\mathfrak{t}\_{\rm D} - v\_-$ is uniformly positive. Furthermore, it follows from the definitions that the form $(\nabla \cdot, \nabla \cdot)\_{L^2(\Omega;\mathbb{C}^n)}$, defined on $H\_0^1(\Omega)$ or $H^1(\Omega)$, is closed in $L^2(\Omega)$; cf. Lemma 5.1.9. Since $V \in L^\infty(\Omega)$, it is clear that the form $(V \cdot, \cdot)\_{L^2(\Omega)}$ is bounded on $L^2(\Omega)$, and hence it follows from Theorem 5.1.16 that also the forms $\mathfrak{t}\_{\rm D}$ and $\mathfrak{t}\_{\rm N}$ are closed in $L^2(\Omega)$. Therefore, by the first representation theorem (Theorem 5.1.18), there exist unique semibounded self-adjoint operators $A\_{\rm D}$ and $A\_{\rm N}$ in $L^2(\Omega)$ associated with $\mathfrak{t}\_{\rm D}$ and $\mathfrak{t}\_{\rm N}$, respectively, such that

$$(A\_{\mathcal{D}}f,g)\_{L^{2}(\Omega)} = \mathfrak{t}\_{\mathcal{D}}[f,g] \quad \text{for } f \in \text{dom}\,A\_{\mathcal{D}}, \ g \in \text{dom}\,\mathfrak{t}\_{\mathcal{D}},$$

and

$$(A\_{\mathcal{N}}f,g)\_{L^{2}(\Omega)} = \mathfrak{t}\_{\mathcal{N}}[f,g] \quad \text{for} \ f \in \text{dom}\,A\_{\mathcal{N}}, \ g \in \text{dom}\,\mathfrak{t}\_{\mathcal{N}}.$$

The self-adjoint operators $A\_{\rm D}$ and $A\_{\rm N}$ are called the Dirichlet operator and the Neumann operator, respectively. In the next propositions some properties of these operators are discussed.

**Proposition 8.3.2.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded domain. Then the Dirichlet operator $A\_{\rm D}$ is given by

$$\begin{aligned} A\_{\mathcal{D}}f &= -\Delta f + Vf, \\ \operatorname{dom} A\_{\mathcal{D}} &= \left\{ f \in H\_0^1(\Omega) : -\Delta f + Vf \in L^2(\Omega) \right\}, \end{aligned} \tag{8.3.5}$$

and for all $\lambda \in \rho(A\_{\rm D})$ the resolvent $(A\_{\rm D} - \lambda)^{-1}$ is a compact operator in $L^2(\Omega)$. The Dirichlet operator $A\_{\rm D}$ coincides with the Friedrichs extension $S\_{\rm F}$ of the minimal operator $T\_{\rm min}$ in (8.3.2). In particular, $A\_{\rm D} - v\_-$ is uniformly positive. Furthermore, if $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain, then the Dirichlet operator $A\_{\rm D}$ is given by

$$\begin{aligned} A\_{\mathcal{D}}f &= -\Delta f + Vf, \\ \text{dom}\, A\_{\mathcal{D}} &= \left\{ f \in H^1(\Omega) : -\Delta f + Vf \in L^2(\Omega), \,\tau\_{\mathcal{D}}^{(1)}f = 0 \right\}. \end{aligned} \tag{8.3.6}$$

Proof. Observe that for $f \in \operatorname{dom} A\_{\rm D}$ and $g \in C\_0^\infty(\Omega) \subset \operatorname{dom} \mathfrak{t}\_{\rm D}$ one has

$$(A\_{\mathcal{D}}f,g)\_{L^2(\Omega)} = \mathfrak{t}\_{\mathcal{D}}[f,g] = (\nabla f, \nabla g)\_{L^2(\Omega; \mathbb{C}^n)} + (Vf, g)\_{L^2(\Omega)} = \left(-\Delta f + Vf\right)(\overline{g}),$$

where $-\Delta f + Vf$ is viewed as a distribution. Since this identity holds for all $g \in C\_0^\infty(\Omega)$ and $A\_{\rm D}$ is an operator in $L^2(\Omega)$, it follows that

$$-\Delta f + Vf = A\_{\mathbb{D}}f \in L^2(\Omega).$$

Therefore, $A\_{\rm D}$ is given by (8.3.5). In the case that $\Omega \subset \mathbb{R}^n$ has a $C^2$-smooth boundary, the form of the domain of $A\_{\rm D}$ in (8.3.6) follows from (8.3.5) and (8.2.17).

Next it will be shown that for $\lambda \in \rho(A\_{\rm D})$ the resolvent $(A\_{\rm D} - \lambda)^{-1}$ is a compact operator in $L^2(\Omega)$. For this observe first that

$$(A\_{\rm D} - \lambda)^{-1} : L^2(\Omega) \to H\_0^1(\Omega), \quad \lambda \in \rho(A\_{\rm D}), \tag{8.3.7}$$

is everywhere defined and closed as an operator from $L^2(\Omega)$ into $H\_0^1(\Omega)$. In fact, if $f\_n \to f$ in $L^2(\Omega)$ and $(A\_{\rm D} - \lambda)^{-1} f\_n \to h$ in $H\_0^1(\Omega)$, then $(A\_{\rm D} - \lambda)^{-1} f\_n \to h$ in $L^2(\Omega)$, and since the operator $(A\_{\rm D} - \lambda)^{-1}$ is everywhere defined and continuous in $L^2(\Omega)$, it is clear that $(A\_{\rm D} - \lambda)^{-1} f = h$. Hence, the operator in (8.3.7) is bounded by the closed graph theorem. By Rellich's theorem the embedding $H\_0^1(\Omega) \hookrightarrow L^2(\Omega)$ is compact, and it follows that $(A\_{\rm D} - \lambda)^{-1}$, $\lambda \in \rho(A\_{\rm D})$, is a compact operator in $L^2(\Omega)$.
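Compactness of the resolvent implies, in particular, that the spectrum of $A\_{\rm D}$ is discrete. As a purely numerical illustration (not part of the text; the interval $\Omega = (0,1)$ with $V = 0$, the finite-difference discretization, and the reference values $(k\pi)^2$ are assumptions made here), one can approximate the one-dimensional Dirichlet problem and observe the isolated eigenvalues:

```python
import numpy as np

# Standard second-order finite differences for -d^2/dx^2 on (0, 1)
# with Dirichlet boundary conditions; n interior grid points.
n = 500
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2
off = -np.ones(n - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigs = np.linalg.eigvalsh(A)[:3]            # three lowest eigenvalues
exact = np.array([(k * np.pi)**2 for k in (1, 2, 3)])

print(eigs)
print(exact)
```

The approximate eigenvalues cluster near $(k\pi)^2$, isolated and of finite multiplicity, as the compactness of the resolvent predicts.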

It remains to verify that $A\_{\rm D}$ is the Friedrichs extension $S\_{\rm F}$ of $T\_{\rm min} = \overline{T\_0}$ or, equivalently, the Friedrichs extension of $T\_0$; cf. Lemma 5.3.1 and Definition 5.3.2. For this, consider the form $\mathfrak{t}\_{T\_0}[f,g] = (T\_0 f, g)\_{L^2(\Omega)}$, defined for $f, g \in \operatorname{dom} T\_0$, and note that

$$\mathfrak{t}\_{T\_0}[f,g] = (\nabla f, \nabla g)\_{L^2(\Omega; \mathbb{C}^n)} + (Vf, g)\_{L^2(\Omega)}, \quad \text{dom}\, \mathfrak{t}\_{T\_0} = C\_0^\infty(\Omega).$$

Observe that $f\_m \to\_{\mathfrak{t}\_{T\_0}} f$ if and only if $f\_m \to f$ in $L^2(\Omega)$ and $(\nabla f\_m)$ is a Cauchy sequence in $L^2(\Omega; \mathbb{C}^n)$. Hence, $f\_m \to\_{\mathfrak{t}\_{T\_0}} f$ implies $f\_m \to f$ in the norm of $H^1(\Omega)$, and so $f \in H\_0^1(\Omega)$ by (8.2.2). Therefore, by (5.1.16), the closure of the form $\mathfrak{t}\_{T\_0}$ is given by

$$\widetilde{\mathfrak{t}}\_{T\_0}[f,g] = \lim\_{m \to \infty} \mathfrak{t}\_{T\_0}[f\_m, g\_m] = (\nabla f, \nabla g)\_{L^2(\Omega; \mathbb{C}^n)} + (Vf, g)\_{L^2(\Omega)},$$

where $f, g \in H\_0^1(\Omega)$ and $f\_m \to\_{\mathfrak{t}\_{T\_0}} f$, $g\_m \to\_{\mathfrak{t}\_{T\_0}} g$. Hence, $\widetilde{\mathfrak{t}}\_{T\_0} = \mathfrak{t}\_{\rm D}$, and since by Definition 5.3.2 the Friedrichs extension of $T\_0$ is the unique self-adjoint operator corresponding to the closed form $\widetilde{\mathfrak{t}}\_{T\_0}$, the assertion follows. $\square$

In order to specify the Neumann operator $A\_{\rm N}$, the first Green identity and the trace operators $\tau\_{\rm D}^{(1)} : H^1(\Omega) \to H^{1/2}(\partial\Omega)$ and $\tau\_{\rm N}^{(1)} : H^1(\Omega) \to H^{-1/2}(\partial\Omega)$ will be used; cf. Lemma 8.2.4. For this reason it is assumed in the next proposition that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain. It is also important to note that the Neumann operator $A\_{\rm N}$ below differs from the Kreĭn–von Neumann extension and the Kreĭn-type extensions in Definition 5.4.2; cf. Section 8.5 for more details.

**Proposition 8.3.3.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain. Then the Neumann operator $A\_{\rm N}$ is given by

$$\begin{aligned} A\_{\mathcal{N}}f &= -\Delta f + Vf, \\ \text{dom}\, A\_{\mathcal{N}} &= \left\{ f \in H^1(\Omega) : -\Delta f + Vf \in L^2(\Omega), \,\tau\_{\mathcal{N}}^{(1)}f = 0 \right\}, \end{aligned} \tag{8.3.8}$$

and for all $\lambda \in \rho(A\_{\rm N})$ the resolvent $(A\_{\rm N} - \lambda)^{-1}$ is a compact operator in $L^2(\Omega)$.

Proof. In a first step it follows for $f \in \operatorname{dom} A\_{\rm N} \subset H^1(\Omega)$ and all $g \in C\_0^\infty(\Omega)$, in the same way as in the proof of Proposition 8.3.2, that

$$(A\_\mathcal{N}f,g)\_{L^2(\Omega)} = \mathfrak{t}\_\mathcal{N}[f,g] = (\nabla f, \nabla g)\_{L^2(\Omega; \mathbb{C}^n)} + (Vf, g)\_{L^2(\Omega)} = \left(-\Delta f + Vf\right)(\overline{g}),$$

and hence $A\_{\rm N} f = (-\Delta + V)f \in L^2(\Omega)$. In particular, for $f \in \operatorname{dom} A\_{\rm N}$ one has $f \in H^1(\Omega)$ and $\Delta f \in L^2(\Omega)$, so that Lemma 8.2.4 applies and yields

$$\begin{aligned} (A\_{\rm N}f,g)\_{L^{2}(\Omega)} &= \mathfrak{t}\_{\rm N}[f,g] \\ &= (\nabla f, \nabla g)\_{L^{2}(\Omega; \mathbb{C}^{n})} + (Vf, g)\_{L^{2}(\Omega)} \\ &= \left( (-\Delta + V)f, g \right)\_{L^{2}(\Omega)} + \left\langle \tau\_{\rm N}^{(1)}f, \tau\_{\rm D}^{(1)}g \right\rangle\_{H^{-1/2}(\partial\Omega) \times H^{1/2}(\partial\Omega)} \end{aligned}$$

for all $g \in \operatorname{dom} \mathfrak{t}\_{\rm N} = H^1(\Omega)$. As $A\_{\rm N} f = (-\Delta + V)f$, one concludes that

$$\left\langle \tau\_{\mathrm{N}}^{(1)}f, \tau\_{\mathrm{D}}^{(1)}g \right\rangle\_{H^{-1/2}(\partial\Omega)\times H^{1/2}(\partial\Omega)} = 0 \quad \text{for all } g \in H^1(\Omega).$$

Since $\tau\_{\rm D}^{(1)} : H^1(\Omega) \to H^{1/2}(\partial\Omega)$ is surjective, it follows that $\tau\_{\rm N}^{(1)} f = 0$. This implies the representation (8.3.8).

To show that the resolvent $(A\_{\rm N} - \lambda)^{-1}$ is a compact operator in $L^2(\Omega)$, one argues in the same way as in the proof of Proposition 8.3.2. In fact, the operator

$$(A\_{\mathcal{N}} - \lambda)^{-1} : L^2(\Omega) \to H^1(\Omega), \quad \lambda \in \rho(A\_{\mathcal{N}}),$$

is everywhere defined and closed, and hence bounded by the closed graph theorem. Since $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain, the embedding $H^1(\Omega) \hookrightarrow L^2(\Omega)$ is compact, and this implies that $(A\_{\rm N} - \lambda)^{-1}$, $\lambda \in \rho(A\_{\rm N})$, is a compact operator in $L^2(\Omega)$. $\square$

It is known that functions $f$ in $\operatorname{dom} A\_{\rm D}$ or $\operatorname{dom} A\_{\rm N}$ are locally $H^2$-regular, that is, for every compact subset $K \subset \Omega$ the restriction of $f$ to $K$ is in $H^2(K)$. The next theorem is an important elliptic regularity result which ensures $H^2$-regularity up to the boundary for the functions in $\operatorname{dom} A\_{\rm D}$ and $\operatorname{dom} A\_{\rm N}$ in (8.3.6) and (8.3.8), respectively, provided the bounded domain $\Omega$ is $C^2$-smooth in the sense of Definition 8.2.1.

**Theorem 8.3.4.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain. Then one has

$$A\_{\mathcal{D}}f = -\Delta f + Vf, \quad \text{dom}\, A\_{\mathcal{D}} = \left\{ f \in H^2(\Omega) : \tau\_{\mathcal{D}}f = 0 \right\},$$

and

$$A\_{\mathcal{N}}f = -\Delta f + Vf, \quad \text{dom}\, A\_{\mathcal{N}} = \left\{ f \in H^2(\Omega) : \tau\_{\mathcal{N}}f = 0 \right\}.$$

Note that under the assumptions in Theorem 8.3.4 the domain of the Dirichlet operator $A\_{\rm D}$ is $H^2(\Omega) \cap H\_0^1(\Omega)$; cf. (8.2.17). The direct sum decompositions in the next corollary follow immediately from Theorem 1.7.1 when considering the operator $T = -\Delta + V$, $\operatorname{dom} T = H^2(\Omega)$, and taking into account that $A\_{\rm D} \subset T$ and $A\_{\rm N} \subset T$.

**Corollary 8.3.5.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain and denote by $\tau\_{\rm D} : H^2(\Omega) \to H^{3/2}(\partial\Omega)$ and $\tau\_{\rm N} : H^2(\Omega) \to H^{1/2}(\partial\Omega)$ the Dirichlet and Neumann trace operators in (8.2.13) and (8.2.14), respectively. Then for $\lambda \in \rho(A\_{\rm D})$ one has the direct sum decomposition

$$\begin{split}H^2(\Omega) &= \text{dom}\,A\_{\text{D}} + \left\{ f\_{\lambda} \in H^2(\Omega) : (-\Delta + V)f\_{\lambda} = \lambda f\_{\lambda} \right\} \\ &= \text{ker}\,\tau\_{\text{D}} + \left\{ f\_{\lambda} \in H^2(\Omega) : (-\Delta + V)f\_{\lambda} = \lambda f\_{\lambda} \right\},\end{split} \tag{8.3.9}$$

and for $\lambda \in \rho(A\_{\rm N})$ one has the direct sum decomposition

$$\begin{split}H^2(\Omega) &= \text{dom}\,A\_N + \left\{ f\_\lambda \in H^2(\Omega) : (-\Delta + V)f\_\lambda = \lambda f\_\lambda \right\} \\ &= \ker \tau\_N + \left\{ f\_\lambda \in H^2(\Omega) : (-\Delta + V)f\_\lambda = \lambda f\_\lambda \right\}.\end{split} \tag{8.3.10}$$

As a consequence of the decomposition (8.3.9) in Corollary 8.3.5 and (8.2.12), one concludes that the so-called Dirichlet-to-Neumann map in the next definition is a well-defined operator from $H^{3/2}(\partial\Omega)$ into $H^{1/2}(\partial\Omega)$.

**Definition 8.3.6.** Let $\Omega \subset \mathbb{R}^n$ be a bounded $C^2$-domain, let $A\_{\rm D}$ be the self-adjoint Dirichlet operator, and let $\tau\_{\rm D} : H^2(\Omega) \to H^{3/2}(\partial\Omega)$ and $\tau\_{\rm N} : H^2(\Omega) \to H^{1/2}(\partial\Omega)$ be the Dirichlet and Neumann trace operators in (8.2.13) and (8.2.14), respectively. For $\lambda \in \rho(A\_{\rm D})$ the Dirichlet-to-Neumann map is defined as

$$D(\lambda): H^{3/2}(\partial\Omega) \to H^{1/2}(\partial\Omega), \qquad \tau\_{\mathcal{D}}f\_{\lambda} \mapsto \tau\_{\mathcal{N}}f\_{\lambda},$$

where $f\_\lambda \in H^2(\Omega)$ is such that $(-\Delta + V)f\_\lambda = \lambda f\_\lambda$.

Note that for $\lambda \in \rho(A\_{\rm D}) \cap \rho(A\_{\rm N})$ both decompositions (8.3.9) and (8.3.10) in Corollary 8.3.5 hold, and together with (8.2.12) this implies that the Dirichlet-to-Neumann map $D(\lambda)$ is a bijective operator from $H^{3/2}(\partial\Omega)$ onto $H^{1/2}(\partial\Omega)$.
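For orientation, $D(\lambda)$ can be computed in closed form in a classical special case not treated in the text: take $V = 0$, $\lambda = 0$, and $\Omega$ the open unit disk in $\mathbb{R}^2$, so that $0 \in \rho(A\_{\rm D})$ because the Dirichlet Laplacian is uniformly positive. The harmonic extension of the boundary datum $e^{in\theta}$ is $r^{|n|}e^{in\theta}$, and hence

```latex
% Dirichlet-to-Neumann map of -\Delta on the unit disk at \lambda = 0
% (classical example; \nu = \partial_r is the outward normal on r = 1):
D(0)\, e^{in\theta}
  = \partial_r\bigl( r^{|n|} e^{in\theta} \bigr)\big|_{r=1}
  = |n|\, e^{in\theta}, \qquad n \in \mathbb{Z}.
```

On the disk, $D(0)$ thus acts as the Fourier multiplier $n \mapsto |n|$, which makes the mapping property $H^{3/2}(\partial\Omega) \to H^{1/2}(\partial\Omega)$, a loss of exactly one order of smoothness, transparent.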

A further useful consequence of Theorem 8.3.4 is given by the following a priori estimates.

**Corollary 8.3.7.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain and let $A\_{\rm D}$ and $A\_{\rm N}$ be the Dirichlet and Neumann operators, respectively. Then there exist constants $C\_{\rm D} > 0$ and $C\_{\rm N} > 0$ such that

$$\|f\|\_{H^2(\Omega)} \le C\_\mathcal{D} \left( \|f\|\_{L^2(\Omega)} + \|A\_\mathcal{D} f\|\_{L^2(\Omega)} \right), \quad f \in \text{dom}\, A\_\mathcal{D},$$

and

$$\|g\|\_{H^2(\Omega)} \le C\_N \left( \|g\|\_{L^2(\Omega)} + \|A\_N g\|\_{L^2(\Omega)} \right), \quad g \in \text{dom}\, A\_N.$$

Proof. One verifies in the same way as in the proof of Proposition 8.3.2 that for $\lambda \in \rho(A\_{\rm D})$ the operator $(A\_{\rm D} - \lambda)^{-1} : L^2(\Omega) \to H^2(\Omega)$ is everywhere defined and closed, and hence bounded. For $f \in \operatorname{dom} A\_{\rm D}$ choose $h \in L^2(\Omega)$ such that $f = (A\_{\rm D} - \lambda)^{-1} h$. Then

$$\|f\|\_{H^2(\Omega)} = \|(A\_\mathcal{D} - \lambda)^{-1}h\|\_{H^2(\Omega)} \le C\|h\|\_{L^2(\Omega)} = C\_\mathcal{D} \|(A\_\mathcal{D} - \lambda)f\|\_{L^2(\Omega)}$$

for some $C > 0$ and $C\_{\rm D} > 0$, and the first estimate follows. The second estimate is proved in the same way. $\square$

The next lemma is an important ingredient in the following.

**Lemma 8.3.8.** Let $T\_{\rm max}$ be the maximal operator associated with $-\Delta + V$ in (8.3.3). Then the space $C^\infty(\overline{\Omega})$ is dense in $\operatorname{dom} T\_{\rm max}$ with respect to the graph norm.

Proof. Since multiplication by $V \in L^\infty(\Omega)$ is a bounded operator in $L^2(\Omega)$, the graph norms

$$\left(\|\cdot\|\_{L^2(\Omega)}^2 + \|T\_{\text{max}}\cdot\|\_{L^2(\Omega)}^2\right)^{1/2} \quad \text{and} \quad \left(\|\cdot\|\_{L^2(\Omega)}^2 + \|\Delta\cdot\|\_{L^2(\Omega)}^2\right)^{1/2}$$

are equivalent on $\operatorname{dom} T\_{\rm max}$, and hence it is no restriction to assume that $V = 0$. Now suppose that $f \in \operatorname{dom} T\_{\rm max}$ is such that for all $g \in C^\infty(\overline{\Omega})$

$$0 = (f, g)\_{L^2(\Omega)} + (\Delta f, \Delta g)\_{L^2(\Omega)}.\tag{8.3.11}$$

Then (8.3.11) holds, in particular, for all $g \in C\_0^\infty(\Omega)$, so that $0 = (f + \Delta^2 f)(g)$, where $f + \Delta^2 f$ is viewed as a distribution. As $f \in L^2(\Omega)$, one concludes that

$$
\Delta^2 f = -f \in L^2(\Omega). \tag{8.3.12}
$$

Next it will be shown that

$$
\Delta f \in H\_0^2(\Omega). \tag{8.3.13}
$$

In fact, choose an open ball $B$ such that $\overline{\Omega} \subset B$ and let $h \in C\_0^\infty(B)$. Let

$$
\tilde{A}\_\mathcal{D} = -\Delta,\qquad \text{dom}\,\tilde{A}\_\mathcal{D} = H^2(B) \cap H\_0^1(B),
$$

be the self-adjoint Dirichlet Laplacian in $L^2(B)$; cf. Theorem 8.3.4. Since $B$ is bounded, one has $0 \in \rho(\widetilde{A}\_{\rm D})$ by Proposition 8.3.2. As $h \in C\_0^\infty(B)$, elliptic regularity yields $\widetilde{A}\_{\rm D}^{-1} h \in C^\infty(B)$ and hence $(\widetilde{A}\_{\rm D}^{-1} h)|\_{\Omega} \in C^\infty(\overline{\Omega})$ for the restriction onto $\Omega$. Denote by $\widetilde{f}$ and $\widetilde{\Delta f}$ the extensions of $f$ and $\Delta f$ by zero to $B$. Then it follows with the help of (8.3.11) that

$$\begin{aligned} (\widetilde{A}\_{\mathcal{D}}^{-1}\widetilde{f},h)\_{L^2(B)} &= (\widetilde{f},\widetilde{A}\_{\mathcal{D}}^{-1}h)\_{L^2(B)} \\ &= \left(f,(\widetilde{A}\_{\mathcal{D}}^{-1}h)|\_{\Omega}\right)\_{L^2(\Omega)} \\ &= -\left(\Delta f,\Delta(\widetilde{A}\_{\mathcal{D}}^{-1}h)|\_{\Omega}\right)\_{L^2(\Omega)} \\ &= (\widetilde{\Delta f},h)\_{L^2(B)} \end{aligned}$$

holds for all $h \in C\_0^\infty(B)$; in the last step it was used that $\Delta \widetilde{A}\_{\mathcal{D}}^{-1}h = -h$. This yields $\widetilde{\Delta f} = \widetilde{A}\_{\mathcal{D}}^{-1}\widetilde{f} \in H^2(B)$. Moreover, as $\widetilde{\Delta f}$ vanishes outside of $\Omega$, it follows that $\Delta f \in H\_0^2(\Omega)$, that is, (8.3.13) holds.

Now choose a sequence $(\psi\_k) \subset C\_0^\infty(\Omega)$ such that $\psi\_k \to \Delta f$ in $H^2(\Omega)$. Then, by (8.3.12),

$$\begin{aligned} 0 \le (\Delta f, \Delta f)\_{L^2(\Omega)} &= \lim\_{k \to \infty} (\psi\_k, \Delta f)\_{L^2(\Omega)} = \lim\_{k \to \infty} (\Delta \psi\_k, f)\_{L^2(\Omega)} \\ &= (\Delta^2 f, f)\_{L^2(\Omega)} = -(f, f)\_{L^2(\Omega)} \le 0, \end{aligned}$$

that is, $f = 0$ in (8.3.11). Hence, $C^\infty(\overline{\Omega})$ is dense in $\operatorname{dom} T\_{\max}$ with respect to the graph norm. □

The following result on the extension of the Dirichlet trace operator to $\operatorname{dom} T\_{\max}$ is essential for the construction of a boundary triplet for $T\_{\max}$.

**Theorem 8.3.9.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain. Then the Dirichlet trace operator $\tau\_{\mathcal{D}} : H^2(\Omega) \to H^{3/2}(\partial\Omega)$ in (8.2.13) admits a unique extension to a continuous surjective operator

$$
\widetilde{\tau}\_{\mathcal{D}} : \operatorname{dom} T\_{\max} \to H^{-1/2}(\partial \Omega),
$$

where dom Tmax is equipped with the graph norm. Furthermore,

$$\ker \widetilde{\tau}\_{\mathcal{D}} = \ker \tau\_{\mathcal{D}} = \text{dom}\, A\_{\mathcal{D}}.$$
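To put the target space $H^{-1/2}(\partial\Omega)$ of the extended trace into perspective, the following model computation may help (an illustration added here, not part of the surrounding text): take $\Omega = \mathbb{D}$ the open unit disk in $\mathbb{R}^2$, $V = 0$, and consider harmonic functions, that is, elements of $\mathfrak{N}\_0(T\_{\max})$. Writing such a function as a Fourier series, one has

```latex
f(re^{i\theta}) = \sum_{k \in \mathbb{Z}} c_k r^{|k|} e^{ik\theta},
\qquad
\|f\|_{L^2(\mathbb{D})}^2
  = \pi \sum_{k \in \mathbb{Z}} \frac{|c_k|^2}{|k| + 1}.
```

Hence $f \in L^2(\mathbb{D})$ precisely when $\sum\_k |c\_k|^2 (1+|k|)^{-1} < \infty$, which is exactly the condition for the formal boundary values $\sum\_k c\_k e^{ik\theta}$ to belong to $H^{-1/2}(\partial\mathbb{D})$; membership in $H^{3/2}(\partial\mathbb{D})$ would require the much stronger condition $\sum\_k |c\_k|^2(1+|k|)^3 < \infty$. This indicates why $H^{-1/2}(\partial\Omega)$ is the natural target space for the trace on the maximal domain.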

Proof. In the following fix λ ∈ ρ(AD) and consider the operator

$$\Upsilon := -\tau\_{\mathcal{N}} (A\_{\mathcal{D}} - \overline{\lambda})^{-1} : L^2(\Omega) \to H^{1/2}(\partial \Omega). \tag{8.3.14}$$

Since $(A\_{\mathcal{D}} - \overline{\lambda})^{-1} : L^2(\Omega) \to H^2(\Omega)$ is everywhere defined and closed, it is continuous and maps onto $\operatorname{dom} A\_{\mathcal{D}}$. Hence, it follows from Theorem 8.3.4 and (8.2.12) that $\Upsilon \in \mathbf{B}(L^2(\Omega), H^{1/2}(\partial\Omega))$ in (8.3.14) is a surjective operator.

Next it will be shown that

$$\ker \Upsilon = \mathfrak{N}\_\lambda (T\_{\max})^\perp,\tag{8.3.15}$$

where $\mathfrak{N}\_\lambda(T\_{\max}) = \ker (T\_{\max} - \lambda)$. In fact, for the inclusion ($\subset$) in (8.3.15), assume that

$$\Upsilon h = -\tau\_{\mathcal{N}} (A\_{\mathcal{D}} - \overline{\lambda})^{-1} h = 0$$

for some $h \in L^2(\Omega)$. Then it follows from Theorem 8.3.4 that

$$(A\_{\mathcal{D}} - \overline{\lambda})^{-1} h \in \text{dom}\, A\_{\mathcal{D}} \cap \text{dom}\, A\_{\mathcal{N}}$$

and hence $(A\_{\mathcal{D}} - \overline{\lambda})^{-1}h \in \operatorname{dom} T\_{\min}$ by (8.2.15) and (8.3.2). For $f\_\lambda \in \mathfrak{N}\_\lambda(T\_{\max})$ one concludes, together with Proposition 8.3.1, that

$$\begin{aligned} (f\_{\lambda}, h)\_{L^2(\Omega)} &= \left( f\_{\lambda}, (T\_{\text{min}} - \overline{\lambda})(A\_{\text{D}} - \overline{\lambda})^{-1} h \right)\_{L^2(\Omega)} \\ &= \left( (T\_{\text{max}} - \lambda) f\_{\lambda}, (A\_{\text{D}} - \overline{\lambda})^{-1} h \right)\_{L^2(\Omega)} \\ &= 0, \end{aligned}$$

which shows $h \in \mathfrak{N}\_\lambda(T\_{\max})^\perp$. For the inclusion ($\supset$) in (8.3.15), let $h \in \mathfrak{N}\_\lambda(T\_{\max})^\perp$. Then $h \in \operatorname{ran} (T\_{\min} - \overline{\lambda})$, and hence there exists $k \in \operatorname{dom} T\_{\min} = H\_0^2(\Omega)$ such that $h = (T\_{\min} - \overline{\lambda})k$. It follows that

$$\Upsilon h = -\tau\_{\mathcal{N}} (A\_{\mathcal{D}} - \overline{\lambda})^{-1} h = -\tau\_{\mathcal{N}} (A\_{\mathcal{D}} - \overline{\lambda})^{-1} (T\_{\min} - \overline{\lambda}) k = -\tau\_{\mathcal{N}} k = 0,$$

which shows that h ∈ ker Υ. This completes the proof of (8.3.15).

From (8.3.14) and (8.3.15) it follows that the restriction of $\Upsilon$ to $\mathfrak{N}\_\lambda(T\_{\max})$ is an isomorphism from $\mathfrak{N}\_\lambda(T\_{\max})$ onto $H^{1/2}(\partial\Omega)$. This implies that the dual operator

$$\Upsilon' : H^{-1/2}(\partial\Omega) \to L^2(\Omega) \tag{8.3.16}$$

is bounded and invertible, and by the closed range theorem (see Theorem 1.3.5 for the Hilbert space adjoint) one has

$$
\operatorname{ran} \Upsilon' = (\ker \Upsilon)^\perp = \mathfrak{N}\_\lambda (T\_{\max}) .
$$

The inverse $(\Upsilon')^{-1}$ is regarded as an isomorphism from $\mathfrak{N}\_\lambda(T\_{\max})$ onto $H^{-1/2}(\partial\Omega)$. Now recall the direct sum decomposition

$$\operatorname{dom} T\_{\max} = \operatorname{dom} A\_{\mathrm{D}} + \mathfrak{N}\_{\lambda}(T\_{\max})$$

from Theorem 1.7.1 or Corollary 1.7.5, and write the elements f ∈ dom Tmax accordingly,

$$f = f\_{\mathcal{D}} + f\_{\lambda}, \quad f\_{\mathcal{D}} \in \text{dom}\, A\_{\mathcal{D}}, \quad f\_{\lambda} \in \mathfrak{N}\_{\lambda}(T\_{\text{max}}).$$

Define the mapping

$$
\widetilde{\tau}\_{\mathcal{D}} : \operatorname{dom} T\_{\max} \to H^{-1/2}(\partial \Omega), \qquad f \mapsto \widetilde{\tau}\_{\mathcal{D}} f = (\Upsilon')^{-1} f\_{\lambda}. \tag{8.3.17}
$$

Next it will be shown that $\widetilde{\tau}\_{\mathcal{D}}$ is an extension of the Dirichlet trace operator $\tau\_{\mathcal{D}} : H^2(\Omega) \to H^{3/2}(\partial\Omega)$. For this, consider $\varphi \in \operatorname{ran} \tau\_{\mathcal{D}} = H^{3/2}(\partial\Omega) \subset H^{-1/2}(\partial\Omega)$ and note that by (8.3.9) and (8.2.12) there exists a unique $f\_\lambda \in H^2(\Omega)$ such that

$$(-\Delta + V)f\_{\lambda} = \lambda f\_{\lambda} \quad \text{and} \quad \tau\_{\mathcal{D}}f\_{\lambda} = \varphi. \tag{8.3.18}$$

Let $h \in L^2(\Omega)$ and set $k := (A\_{\mathcal{D}} - \overline{\lambda})^{-1}h$. Then, by (8.3.14), the fact that $\tau\_{\mathcal{D}}k = 0$, and the second Green identity (8.2.19),

$$\begin{split} \left(\Upsilon^{\prime}\varphi,h\right)\_{L^{2}(\Omega)} &= \left(\varphi,\Upsilon h\right)\_{H^{-1/2}(\partial\Omega)\times H^{1/2}(\partial\Omega)} \\ &= \left(\varphi,\Upsilon h\right)\_{L^{2}(\partial\Omega)} \\ &= -\left(\varphi,\tau\_{\mathcal{N}}(A\_{\mathcal{D}}-\overline{\lambda})^{-1}h\right)\_{L^{2}(\partial\Omega)} \\ &= -\left(\tau\_{\mathcal{D}}f\_{\lambda},\tau\_{\mathcal{N}}k\right)\_{L^{2}(\partial\Omega)} + \left(\tau\_{\mathcal{N}}f\_{\lambda},\tau\_{\mathcal{D}}k\right)\_{L^{2}(\partial\Omega)} \\ &= -\left((-\Delta+V)f\_{\lambda},k\right)\_{L^{2}(\Omega)} + \left(f\_{\lambda},(-\Delta+V)k\right)\_{L^{2}(\Omega)} \\ &= -\left(\lambda f\_{\lambda},k\right)\_{L^{2}(\Omega)} + \left(f\_{\lambda},A\_{\mathcal{D}}k\right)\_{L^{2}(\Omega)} \\ &= \left(f\_{\lambda},\left(A\_{\mathcal{D}}-\overline{\lambda}\right)k\right)\_{L^{2}(\Omega)} \\ &= \left(f\_{\lambda},h\right)\_{L^{2}(\Omega)}, \end{split}$$

and thus $\Upsilon'\varphi = f\_\lambda$. Hence, the restriction of $\Upsilon'$ to $H^{3/2}(\partial\Omega)$ maps $\varphi \in H^{3/2}(\partial\Omega)$ to the unique $H^2(\Omega)$-solution $f\_\lambda$ of the boundary value problem (8.3.18), that is, to the unique element $f\_\lambda \in \mathfrak{N}\_\lambda(T\_{\max}) \cap H^2(\Omega)$ such that $\tau\_{\mathcal{D}}f\_\lambda = \varphi$. Therefore, $(\Upsilon')^{-1}$ maps the elements in $\mathfrak{N}\_\lambda(T\_{\max}) \cap H^2(\Omega)$ onto their Dirichlet boundary values, that is,

$$(\Upsilon')^{-1}f\_{\lambda} = \tau\_{\mathcal{D}}f\_{\lambda} \quad \text{for} \quad f\_{\lambda} \in \mathfrak{N}\_{\lambda}(T\_{\text{max}}) \cap H^2(\Omega).$$

By definition, $\widetilde{\tau}\_{\mathcal{D}}f\_{\mathcal{D}} = 0 = \tau\_{\mathcal{D}}f\_{\mathcal{D}}$ for $f\_{\mathcal{D}} \in \operatorname{dom} A\_{\mathcal{D}}$. Therefore, if $f \in H^2(\Omega)$ is decomposed according to (8.3.9) as

$$f = f\_{\mathcal{D}} + f\_{\lambda}, \quad f\_{\mathcal{D}} \in \text{dom}\, A\_{\mathcal{D}}, \quad f\_{\lambda} \in \mathfrak{N}\_{\lambda}(T\_{\text{max}}) \cap H^2(\Omega),$$

then

$$
\widetilde{\tau}\_{\mathcal{D}}f = \widetilde{\tau}\_{\mathcal{D}}(f\_{\mathcal{D}} + f\_{\lambda}) = (\Upsilon')^{-1}f\_{\lambda} = \tau\_{\mathcal{D}}f\_{\lambda} = \tau\_{\mathcal{D}}f,
$$

so that $\widetilde{\tau}\_{\mathcal{D}}$ in (8.3.17) is an extension of $\tau\_{\mathcal{D}}$. Note that by construction $\widetilde{\tau}\_{\mathcal{D}}$ is surjective. Furthermore, the property $\ker \widetilde{\tau}\_{\mathcal{D}} = \ker \tau\_{\mathcal{D}} = \operatorname{dom} A\_{\mathcal{D}}$ is clear from the definition.

It remains to show that <sup>τ</sup><sup>D</sup> in (8.3.17) is continuous with respect to the graph norm on dom Tmax. For this, consider f = f<sup>D</sup> + f<sup>λ</sup> ∈ dom Tmax with f<sup>D</sup> ∈ dom A<sup>D</sup> and f<sup>λ</sup> ∈ Nλ(Tmax), and note that

$$\begin{aligned} f\_{\lambda} &= f - f\_{\mathcal{D}} = f - (A\_{\mathcal{D}} - \lambda)^{-1} (T\_{\max} - \lambda) f\_{\mathcal{D}} \\ &= f - (A\_{\mathcal{D}} - \lambda)^{-1} (T\_{\max} - \lambda) f. \end{aligned}$$

Since $(\Upsilon')^{-1} : \mathfrak{N}\_\lambda(T\_{\max}) \to H^{-1/2}(\partial\Omega)$ is an isomorphism and hence, in particular, bounded, one has

$$\begin{aligned} \|\widetilde{\tau}\_{\mathcal{D}}f\|\_{H^{-1/2}(\partial\Omega)} &= \| (\Upsilon')^{-1} f\_{\lambda} \|\_{H^{-1/2}(\partial\Omega)} \\ &\leq C \| f\_{\lambda} \|\_{L^2(\Omega)} \\ &\leq C \Big( \| f \|\_{L^2(\Omega)} + \| (A\_{\mathcal{D}} - \lambda)^{-1} (T\_{\max} - \lambda) f \|\_{L^2(\Omega)} \Big) \\ &\leq C' \Big( \| f \|\_{L^2(\Omega)} + \| (T\_{\max} - \lambda) f \|\_{L^2(\Omega)} \Big) \\ &\leq C'' \Big( \| f \|\_{L^2(\Omega)} + \| T\_{\max} f \|\_{L^2(\Omega)} \Big) \end{aligned}$$

with some constants $C, C', C'' > 0$. Thus, $\widetilde{\tau}\_{\mathcal{D}}$ is continuous. The proof of Theorem 8.3.9 is complete. □

The following result is parallel to Theorem 8.3.9 and can be proved in a similar way.

**Theorem 8.3.10.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain. Then the Neumann trace operator $\tau\_{\mathcal{N}} : H^2(\Omega) \to H^{1/2}(\partial\Omega)$ in (8.2.14) admits a unique extension to a continuous surjective operator

$$
\widetilde{\tau}\_{\mathcal{N}} : \operatorname{dom} T\_{\max} \to H^{-3/2}(\partial \Omega),
$$

where dom Tmax is equipped with the graph norm. Furthermore,

$$\ker \widetilde{\tau}\_{\mathcal{N}} = \ker \tau\_{\mathcal{N}} = \text{dom}\, A\_{\mathcal{N}}.$$

As a consequence of Theorem 8.3.9 and Theorem 8.3.10 one can also extend the second Green identity in (8.2.19) to elements $f \in \operatorname{dom} T\_{\max}$ and $g \in H^2(\Omega)$.

**Corollary 8.3.11.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain, and let

$$\widetilde{\tau}\_{\mathcal{D}} : \operatorname{dom} T\_{\max} \to H^{-1/2}(\partial\Omega) \quad \text{and} \quad \widetilde{\tau}\_{\mathcal{N}} : \operatorname{dom} T\_{\max} \to H^{-3/2}(\partial\Omega)$$

be the unique continuous extensions of the Dirichlet and Neumann trace operators

$$\tau\_{\mathcal{D}} : H^2(\Omega) \to H^{3/2}(\partial \Omega) \quad \text{and} \quad \tau\_{\mathcal{N}} : H^2(\Omega) \to H^{1/2}(\partial \Omega)$$

from Theorem 8.3.9 and Theorem 8.3.10, respectively. Then the second Green identity in (8.2.19) extends to

$$\begin{aligned} & (T\_{\max}f, g)\_{L^2(\Omega)} - (f, T\_{\max}g)\_{L^2(\Omega)} \\ &= \langle \widetilde{\tau}\_{\mathcal{D}}f, \tau\_{\mathcal{N}}g \rangle\_{H^{-1/2}(\partial\Omega) \times H^{1/2}(\partial\Omega)} - \langle \widetilde{\tau}\_{\mathcal{N}}f, \tau\_{\mathcal{D}}g \rangle\_{H^{-3/2}(\partial\Omega) \times H^{3/2}(\partial\Omega)} \end{aligned}$$

for $f \in \operatorname{dom} T\_{\max}$ and $g \in H^2(\Omega)$.

Proof. Let $f \in \operatorname{dom} T\_{\max}$ and $g \in H^2(\Omega)$. Since $C^\infty(\overline{\Omega})$ is dense in $\operatorname{dom} T\_{\max}$ with respect to the graph norm by Lemma 8.3.8 and $C^\infty(\overline{\Omega}) \subset H^2(\Omega) \subset \operatorname{dom} T\_{\max}$, there exists a sequence $(f\_n) \subset H^2(\Omega)$ such that $f\_n \to f$ and $T\_{\max}f\_n \to T\_{\max}f$ in $L^2(\Omega)$. Moreover, $\tau\_{\mathcal{D}}f\_n \to \widetilde{\tau}\_{\mathcal{D}}f$ in $H^{-1/2}(\partial\Omega)$ and $\tau\_{\mathcal{N}}f\_n \to \widetilde{\tau}\_{\mathcal{N}}f$ in $H^{-3/2}(\partial\Omega)$, because $\widetilde{\tau}\_{\mathcal{D}}$ and $\widetilde{\tau}\_{\mathcal{N}}$ are continuous with respect to the graph norm. Therefore, with the help of the second Green identity (8.2.19), one concludes that

$$\begin{split} & \quad (T\_{\text{max}}f,g)\_{L^{2}(\Omega)} - (f,T\_{\text{max}}g)\_{L^{2}(\Omega)} \\ & \quad = \lim\_{n \to \infty} (T\_{\text{max}}f\_{n},g)\_{L^{2}(\Omega)} - \lim\_{n \to \infty} (f\_{n},T\_{\text{max}}g)\_{L^{2}(\Omega)} \\ & \quad = \lim\_{n \to \infty} \left[ \left( \tau\_{\text{D}}f\_{n},\tau\_{\text{N}}g \right)\_{L^{2}(\partial\Omega)} - \left( \tau\_{\text{N}}f\_{n},\tau\_{\text{D}}g \right)\_{L^{2}(\partial\Omega)} \right] \\ & \quad = \lim\_{n \to \infty} \left[ \left< \tau\_{\text{D}}f\_{n},\tau\_{\text{N}}g \right>\_{H^{-1/2}(\partial\Omega) \times H^{1/2}(\partial\Omega)} - \left< \tau\_{\text{N}}f\_{n},\tau\_{\text{D}}g \right>\_{H^{-3/2}(\partial\Omega) \times H^{3/2}(\partial\Omega)} \right] \\ & \quad = \left< \widetilde{\tau}\_{\text{D}}f,\tau\_{\text{N}}g \right>\_{H^{-1/2}(\partial\Omega) \times H^{1/2}(\partial\Omega)} - \left< \widetilde{\tau}\_{\text{N}}f,\tau\_{\text{D}}g \right>\_{H^{-3/2}(\partial\Omega) \times H^{3/2}(\partial\Omega)}, \end{split}$$

which completes the proof. □
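In one space dimension the traces are simply endpoint evaluations, and the classical identity (8.2.19) that the corollary extends can be checked directly. The following sketch (an added illustration; the choices $\Omega = (0,1)$ and $V = 0$ are assumptions made only here) verifies the identity exactly for real polynomials, using the outward normal derivatives $-f'(0)$ and $f'(1)$:

```python
from numpy.polynomial import Polynomial as P

def ip(p, q):
    """L^2(0,1) inner product of two real polynomials, computed exactly."""
    r = (p * q).integ()
    return r(1.0) - r(0.0)

def green_defect(f, g):
    """(Tf, g) - (f, Tg) minus the boundary terms of the second Green
    identity on (0,1) with T = -d^2/dx^2 (toy model, V = 0)."""
    Tf, Tg = -f.deriv(2), -g.deriv(2)
    lhs = ip(Tf, g) - ip(f, Tg)
    df, dg = f.deriv(), g.deriv()
    # outward normal derivative: -f'(0) at x = 0 and +f'(1) at x = 1
    dirichlet_neumann = f(0.0) * (-dg(0.0)) + f(1.0) * dg(1.0)
    neumann_dirichlet = (-df(0.0)) * g(0.0) + df(1.0) * g(1.0)
    return lhs - (dirichlet_neumann - neumann_dirichlet)

f = P([0.0, 0.0, 1.0])        # f(x) = x^2
g = P([1.0, -1.0, 0.0, 2.0])  # g(x) = 1 - x + 2x^3
print(abs(green_defect(f, g)))  # vanishes up to rounding
```

That the defect vanishes on smooth functions mirrors the limit argument in the proof above: once the identity holds on a dense smooth class, continuity of the extended traces carries it over to all of $\operatorname{dom} T\_{\max}$.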

Note that, by construction, there exists a bounded right inverse for the extended Dirichlet trace operator $\widetilde{\tau}\_{\mathcal{D}}$ (see (8.3.16)–(8.3.17)), and similarly there exists a bounded right inverse for the extended Neumann trace operator $\widetilde{\tau}\_{\mathcal{N}}$. This also implies that the Dirichlet-to-Neumann map in Definition 8.3.6 admits a natural extension to a bounded mapping from $H^{-1/2}(\partial\Omega)$ into $H^{-3/2}(\partial\Omega)$.

**Corollary 8.3.12.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain and let $\widetilde{\tau}\_{\mathcal{D}}$ and $\widetilde{\tau}\_{\mathcal{N}}$ be the unique continuous extensions of the Dirichlet and Neumann trace operators from Theorem 8.3.9 and Theorem 8.3.10, respectively. Then for $\lambda \in \rho(A\_{\mathcal{D}})$ the Dirichlet-to-Neumann map in Definition 8.3.6 admits an extension to a bounded operator

$$
\widetilde{D}(\lambda) : H^{-1/2}(\partial \Omega) \to H^{-3/2}(\partial \Omega), \qquad \widetilde{\tau}\_{\mathcal{D}} f\_\lambda \mapsto \widetilde{\tau}\_{\mathcal{N}} f\_\lambda,
$$

where $f\_\lambda \in \mathfrak{N}\_\lambda(T\_{\max})$.
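For orientation, the extension $\widetilde{D}(\lambda)$ can be computed explicitly in a model case (an added illustration; the unit disk, $V = 0$, and $\lambda = 0 \in \rho(A\_{\mathcal{D}})$, which holds since the Dirichlet Laplacian on the disk is strictly positive, are assumptions made only here): the element of $\mathfrak{N}\_0(T\_{\max})$ with Dirichlet boundary values $e^{ik\theta}$ is the harmonic function $r^{|k|}e^{ik\theta}$, whose outward normal derivative on the unit circle is $|k|e^{ik\theta}$, so that

```latex
\widetilde{D}(0)\, e^{ik\theta} = |k|\, e^{ik\theta}, \qquad k \in \mathbb{Z}.
```

The Fourier multiplier $|k|$ costs exactly one order of boundary smoothness, that is, it maps $H^{s}(\partial\mathbb{D})$ boundedly into $H^{s-1}(\partial\mathbb{D})$ for every $s$; in particular, it is bounded from $H^{-1/2}(\partial\mathbb{D})$ to $H^{-3/2}(\partial\mathbb{D})$, in accordance with Corollary 8.3.12.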

For later purposes the following fact is provided.

**Proposition 8.3.13.** The minimal operator Tmin in (8.3.2) is simple.

Proof. Since $A\_{\mathcal{D}}$ is a self-adjoint extension of $T\_{\min}$ with discrete spectrum, it suffices to check that $T\_{\min}$ has no eigenvalues; cf. Proposition 3.4.8. For this, assume that $T\_{\min}f = \lambda f$ for some $\lambda \in \mathbb{R}$ and some $f \in \operatorname{dom} T\_{\min}$. Since $\operatorname{dom} T\_{\min} = H\_0^2(\Omega)$, there exists a sequence $(f\_k) \subset C\_0^\infty(\Omega)$ such that $f\_k \to f$ in $H^2(\Omega)$. Denote the zero extensions of $f$ and $f\_k$ to all of $\mathbb{R}^n$ by $\widetilde{f}$ and $\widetilde{f}\_k$, respectively. Then $\widetilde{f}\_k \to \widetilde{f}$ in $L^2(\mathbb{R}^n)$ and for all $h \in C\_0^\infty(\mathbb{R}^n)$ and $\alpha \in \mathbb{N}\_0^n$ such that $|\alpha| \le 2$ one computes

$$\begin{aligned} \int\_{\mathbb{R}^n} \widetilde{f}(x) D^\alpha h(x)\,dx &= \lim\_{k \to \infty} \int\_{\mathbb{R}^n} \widetilde{f}\_k(x) D^\alpha h(x)\,dx \\ &= (-1)^{|\alpha|} \lim\_{k \to \infty} \int\_{\mathbb{R}^n} (D^\alpha \widetilde{f}\_k)(x) h(x)\,dx \\ &= (-1)^{|\alpha|} \lim\_{k \to \infty} \int\_{\Omega} (D^\alpha f\_k)(x) h(x)\,dx \\ &= (-1)^{|\alpha|} \int\_{\Omega} (D^{\alpha}f)(x)h(x)\,dx \\ &= (-1)^{|\alpha|} \int\_{\mathbb{R}^n} \widetilde{(D^{\alpha}f)}(x)h(x)\,dx, \end{aligned}$$

where $\widetilde{(D^{\alpha}f)}$ denotes the zero extension of $D^{\alpha}f$ to all of $\mathbb{R}^n$. It follows from this computation that

$$D^{\alpha}\widetilde{f} = \widetilde{(D^{\alpha}f)} \in L^{2}(\mathbb{R}^{n}), \qquad |\alpha| \le 2,$$

and hence $\widetilde{f} \in H^2(\mathbb{R}^n)$. Furthermore, if $\widetilde{V} \in L^\infty(\mathbb{R}^n)$ denotes some real extension of $V$, then $(-\Delta + \widetilde{V})\widetilde{f} = \lambda \widetilde{f}$, and since $\widetilde{f}$ vanishes on an open subset of $\mathbb{R}^n$, the unique continuation principle (see, e.g., [652, Theorem XIII.63]) implies $\widetilde{f} = 0$, so that $f = 0$. Therefore, $T\_{\min}$ has no eigenvalues and now Proposition 3.4.8 shows that $T\_{\min}$ is simple. □

## **8.4 A boundary triplet for the maximal Schrödinger operator**

In this section a boundary triplet $\{L^2(\partial\Omega), \Gamma\_0, \Gamma\_1\}$ for the maximal operator $T\_{\max}$ in (8.3.3) is provided under the assumption that $\Omega \subset \mathbb{R}^n$ is a bounded $C^2$-domain. The corresponding Weyl function is closely connected to the extended Dirichlet-to-Neumann map in Corollary 8.3.12. As examples, Neumann and Robin type boundary conditions are discussed, and it is also explained that there exist self-adjoint realizations of $-\Delta + V$ in $L^2(\Omega)$ which are not semibounded and which may have essential spectrum of rather arbitrary form.

Recall from Corollary 8.2.2 that

$$\{H^{1/2}(\partial\Omega), L^2(\partial\Omega), H^{-1/2}(\partial\Omega)\}$$

is a Gelfand triple and there exist isometric isomorphisms $\iota\_\pm : H^{\pm 1/2}(\partial\Omega) \to L^2(\partial\Omega)$ such that

$$\langle \varphi, \psi \rangle\_{H^{-1/2}(\partial \Omega) \times H^{1/2}(\partial \Omega)} = (\iota\_- \varphi, \iota\_+ \psi)\_{L^2(\partial \Omega)}$$

holds for all $\varphi \in H^{-1/2}(\partial\Omega)$ and $\psi \in H^{1/2}(\partial\Omega)$. For the definition of the boundary mappings in the next proposition recall also the definition and the properties of the Dirichlet operator $A\_{\mathcal{D}}$ (see Theorem 8.3.4), as well as the direct sum decomposition

$$\text{dom}\,T\_{\text{max}} = \text{dom}\,A\_{\text{D}} + \mathfrak{N}\_{\eta}(T\_{\text{max}}),\tag{8.4.1}$$

which holds for all $\eta \in \rho(A\_{\mathcal{D}})$. In particular, since $A\_{\mathcal{D}}$ is semibounded from below, one may choose $\eta \in \rho(A\_{\mathcal{D}}) \cap \mathbb{R}$ in (8.4.1). Further, let $\tau\_{\mathcal{N}} : H^2(\Omega) \to H^{1/2}(\partial\Omega)$ be the Neumann trace operator in (8.2.12) and let $\widetilde{\tau}\_{\mathcal{D}} : \operatorname{dom} T\_{\max} \to H^{-1/2}(\partial\Omega)$ be the extension of the Dirichlet trace operator in Theorem 8.3.9.

**Theorem 8.4.1.** Let $\Omega \subset \mathbb{R}^n$ be a bounded $C^2$-domain, let $A\_{\mathcal{D}}$ be the self-adjoint Dirichlet realization of $-\Delta + V$ in $L^2(\Omega)$ in Theorem 8.3.4, fix $\eta \in \rho(A\_{\mathcal{D}}) \cap \mathbb{R}$, and decompose $f \in \operatorname{dom} T\_{\max}$ according to (8.4.1) in the form $f = f\_{\mathcal{D}} + f\_\eta$, where $f\_{\mathcal{D}} \in \operatorname{dom} A\_{\mathcal{D}}$ and $f\_\eta \in \mathfrak{N}\_\eta(T\_{\max})$. Then $\{L^2(\partial\Omega), \Gamma\_0, \Gamma\_1\}$, where

$$
\Gamma\_0 f = \iota\_- \tilde{\tau}\_{\mathcal{D}} f \quad \text{and} \quad \Gamma\_1 f = -\iota\_+ \tau\_{\mathcal{N}} f\_{\mathcal{D}}, \qquad f = f\_{\mathcal{D}} + f\_{\eta} \in \text{dom}\, T\_{\text{max}},
$$

is a boundary triplet for (Tmin)<sup>∗</sup> = Tmax such that

$$A\_0 = A\_{\mathcal{D}} \qquad \text{and} \qquad A\_1 = T\_{\min} \,\hat{+}\, \mathfrak{N}\_\eta(T\_{\max}).\tag{8.4.2}$$

Proof. Let $f, g \in \operatorname{dom} T\_{\max}$ and decompose $f$ and $g$ in the form $f = f\_{\mathcal{D}} + f\_\eta$ and $g = g\_{\mathcal{D}} + g\_\eta$ with $f\_{\mathcal{D}}, g\_{\mathcal{D}} \in \operatorname{dom} A\_{\mathcal{D}} \subset H^2(\Omega)$ and $f\_\eta, g\_\eta \in \mathfrak{N}\_\eta(T\_{\max})$. Since $A\_{\mathcal{D}}$ is self-adjoint,

$$(T\_{\max} \, f\_{\mathcal{D}}, g\_{\mathcal{D}})\_{L^2(\Omega)} = (A\_{\mathcal{D}} f\_{\mathcal{D}}, g\_{\mathcal{D}})\_{L^2(\Omega)} = (f\_{\mathcal{D}}, A\_{\mathcal{D}} g\_{\mathcal{D}})\_{L^2(\Omega)} = (f\_{\mathcal{D}}, T\_{\max} g\_{\mathcal{D}})\_{L^2(\Omega)}$$

and since η is real, one also has

$$(T\_{\max}f\_{\eta},g\_{\eta})\_{L^{2}(\Omega)} = (\eta f\_{\eta},g\_{\eta})\_{L^{2}(\Omega)} = (f\_{\eta},\eta g\_{\eta})\_{L^{2}(\Omega)} = (f\_{\eta},T\_{\max}g\_{\eta})\_{L^{2}(\Omega)}.$$

Therefore, one obtains

$$\begin{split} & \left( T\_{\text{max}} \, f, g \right)\_{L^{2}(\Omega)} - \left( f, T\_{\text{max}} \, g \right)\_{L^{2}(\Omega)} \\ &= \left( T\_{\text{max}} (f\_{\text{D}} + f\_{\eta}), g\_{\text{D}} + g\_{\eta} \right)\_{L^{2}(\Omega)} - \left( f\_{\text{D}} + f\_{\eta}, T\_{\text{max}} (g\_{\text{D}} + g\_{\eta}) \right)\_{L^{2}(\Omega)} \\ &= \left( T\_{\text{max}} \, f\_{\eta}, g\_{\text{D}} \right)\_{L^{2}(\Omega)} + \left( T\_{\text{max}} \, f\_{\text{D}}, g\_{\eta} \right)\_{L^{2}(\Omega)} \\ &\qquad - \left( f\_{\eta}, T\_{\text{max}} \, g\_{\text{D}} \right)\_{L^{2}(\Omega)} - \left( f\_{\text{D}}, T\_{\text{max}} \, g\_{\eta} \right)\_{L^{2}(\Omega)}. \end{split}$$

Let $\widetilde{\tau}\_{\mathcal{N}}$ be the extension of the Neumann trace to $\operatorname{dom} T\_{\max}$ from Theorem 8.3.10. Then it follows together with Corollary 8.3.11 and $\tau\_{\mathcal{D}}f\_{\mathcal{D}} = \tau\_{\mathcal{D}}g\_{\mathcal{D}} = 0$ that

$$\begin{aligned} & (T\_{\max} f\_{\eta}, g\_{\mathcal{D}})\_{L^{2}(\Omega)} - (f\_{\eta}, T\_{\max} g\_{\mathcal{D}})\_{L^{2}(\Omega)} \\ &= \langle \widetilde{\tau}\_{\mathcal{D}} f\_{\eta}, \tau\_{\mathcal{N}} g\_{\mathcal{D}} \rangle\_{H^{-1/2}(\partial \Omega) \times H^{1/2}(\partial \Omega)} - \langle \widetilde{\tau}\_{\mathcal{N}} f\_{\eta}, \tau\_{\mathcal{D}} g\_{\mathcal{D}} \rangle\_{H^{-3/2}(\partial \Omega) \times H^{3/2}(\partial \Omega)} \\ &= \langle \widetilde{\tau}\_{\mathcal{D}} f\_{\eta}, \tau\_{\mathcal{N}} g\_{\mathcal{D}} \rangle\_{H^{-1/2}(\partial \Omega) \times H^{1/2}(\partial \Omega)} \end{aligned}$$

and

$$\begin{split} & (T\_{\max} f\_{\mathcal{D}}, g\_{\eta})\_{L^{2}(\Omega)} - (f\_{\mathcal{D}}, T\_{\max} g\_{\eta})\_{L^{2}(\Omega)} \\ &= \langle \tau\_{\mathcal{D}} f\_{\mathcal{D}}, \widetilde{\tau}\_{\mathcal{N}} g\_{\eta} \rangle\_{H^{3/2}(\partial \Omega) \times H^{-3/2}(\partial \Omega)} - \langle \tau\_{\mathcal{N}} f\_{\mathcal{D}}, \widetilde{\tau}\_{\mathcal{D}} g\_{\eta} \rangle\_{H^{1/2}(\partial \Omega) \times H^{-1/2}(\partial \Omega)} \\ &= - \langle \tau\_{\mathcal{N}} f\_{\mathcal{D}}, \widetilde{\tau}\_{\mathcal{D}} g\_{\eta} \rangle\_{H^{1/2}(\partial \Omega) \times H^{-1/2}(\partial \Omega)}. \end{split}$$

Hence,

$$\begin{split} & (T\_{\text{max}}f,g)\_{L^{2}(\Omega)} - (f,T\_{\text{max}}g)\_{L^{2}(\Omega)} \\ &= \langle \widetilde{\tau}\_{\text{D}}f\_{\eta}, \tau\_{\text{N}}g\_{\text{D}} \rangle\_{H^{-1/2}(\partial\Omega)\times H^{1/2}(\partial\Omega)} - \langle \tau\_{\text{N}}f\_{\text{D}}, \widetilde{\tau}\_{\text{D}}g\_{\eta} \rangle\_{H^{1/2}(\partial\Omega)\times H^{-1/2}(\partial\Omega)} \\ &= \left(\iota\_{-}\widetilde{\tau}\_{\text{D}}f\_{\eta}, \iota\_{+}\tau\_{\text{N}}g\_{\text{D}}\right)\_{L^{2}(\partial\Omega)} - \left(\iota\_{+}\tau\_{\text{N}}f\_{\text{D}}, \iota\_{-}\widetilde{\tau}\_{\text{D}}g\_{\eta}\right)\_{L^{2}(\partial\Omega)} \end{split}$$

and, since $f\_{\mathcal{D}}, g\_{\mathcal{D}} \in \ker \tau\_{\mathcal{D}} = \ker \widetilde{\tau}\_{\mathcal{D}}$ according to Theorem 8.3.9, one sees that

$$\begin{aligned} & (T\_{\max}f, g)\_{L^{2}(\Omega)} - (f, T\_{\max}g)\_{L^{2}(\Omega)} \\ &= \left(\iota\_{-}\widetilde{\tau}\_{\mathcal{D}}f, \iota\_{+}\tau\_{\mathcal{N}}g\_{\mathcal{D}}\right)\_{L^{2}(\partial\Omega)} - \left(\iota\_{+}\tau\_{\mathcal{N}}f\_{\mathcal{D}}, \iota\_{-}\widetilde{\tau}\_{\mathcal{D}}g\right)\_{L^{2}(\partial\Omega)} \\ &= \left(-\iota\_{+}\tau\_{\mathcal{N}}f\_{\mathcal{D}}, \iota\_{-}\widetilde{\tau}\_{\mathcal{D}}g\right)\_{L^{2}(\partial\Omega)} - \left(\iota\_{-}\widetilde{\tau}\_{\mathcal{D}}f, -\iota\_{+}\tau\_{\mathcal{N}}g\_{\mathcal{D}}\right)\_{L^{2}(\partial\Omega)} \\ &= \left(\Gamma\_{1}f, \Gamma\_{0}g\right)\_{L^{2}(\partial\Omega)} - \left(\Gamma\_{0}f, \Gamma\_{1}g\right)\_{L^{2}(\partial\Omega)} \end{aligned}$$

for all f,g ∈ dom Tmax, that is, the abstract Green identity is satisfied. To verify the surjectivity of the mapping

$$
\begin{pmatrix} \Gamma\_0\\ \Gamma\_1 \end{pmatrix} : \operatorname{dom} T\_{\text{max}} \to L^2(\partial \Omega) \times L^2(\partial \Omega), \tag{8.4.3}
$$

let $\varphi, \psi \in L^2(\partial\Omega)$ and consider $\iota\_-^{-1}\varphi \in H^{-1/2}(\partial\Omega)$ and $-\iota\_+^{-1}\psi \in H^{1/2}(\partial\Omega)$. Observe that by (8.2.12) the Neumann trace operator $\tau\_{\mathcal{N}}$ is a surjective mapping from $\{h \in H^2(\Omega) : \tau\_{\mathcal{D}}h = 0\}$ onto $H^{1/2}(\partial\Omega)$, that is, $\tau\_{\mathcal{N}} : \operatorname{dom} A\_{\mathcal{D}} \to H^{1/2}(\partial\Omega)$ is onto, and hence there exists $f\_{\mathcal{D}} \in \operatorname{dom} A\_{\mathcal{D}}$ such that $\tau\_{\mathcal{N}}f\_{\mathcal{D}} = -\iota\_+^{-1}\psi$. Next recall from Theorem 8.3.9 that the extended Dirichlet trace operator $\widetilde{\tau}\_{\mathcal{D}}$ maps $\operatorname{dom} T\_{\max}$ onto $H^{-1/2}(\partial\Omega)$ and that $\ker \widetilde{\tau}\_{\mathcal{D}} = \ker \tau\_{\mathcal{D}} = \operatorname{dom} A\_{\mathcal{D}}$. Hence, it follows from the direct sum decomposition $\operatorname{dom} T\_{\max} = \operatorname{dom} A\_{\mathcal{D}} + \mathfrak{N}\_\eta(T\_{\max})$ that the restriction $\widetilde{\tau}\_{\mathcal{D}} : \mathfrak{N}\_\eta(T\_{\max}) \to H^{-1/2}(\partial\Omega)$ is bijective; in particular, there exists $f\_\eta \in \mathfrak{N}\_\eta(T\_{\max})$ such that $\widetilde{\tau}\_{\mathcal{D}}f\_\eta = \iota\_-^{-1}\varphi$. Now it follows that $f := f\_{\mathcal{D}} + f\_\eta \in \operatorname{dom} T\_{\max}$ satisfies

$$
\Gamma\_0 f = \iota\_- \widetilde{\tau}\_{\mathcal{D}} f = \iota\_- \widetilde{\tau}\_{\mathcal{D}} f\_\eta = \iota\_- \iota\_-^{-1} \varphi = \varphi
$$

and

$$
\Gamma\_1 f = -\iota\_+ \tau\_\mathcal{N} f\_\mathcal{D} = \iota\_+ \iota\_+^{-1} \psi = \psi,
$$

and hence the mapping in (8.4.3) is onto. Thus, $\{L^2(\partial\Omega), \Gamma\_0, \Gamma\_1\}$ is a boundary triplet for $(T\_{\min})^* = T\_{\max}$, as claimed.

From the definition of $\Gamma\_0$ and $\ker \widetilde{\tau}\_{\mathcal{D}} = \ker \tau\_{\mathcal{D}} = \operatorname{dom} A\_{\mathcal{D}}$ it is clear that $\operatorname{dom} A\_{\mathcal{D}} = \ker \Gamma\_0$, and hence the self-adjoint extension corresponding to $\Gamma\_0$ coincides with the Dirichlet operator $A\_{\mathcal{D}}$, that is, the first identity in (8.4.2) holds. It remains to check the second identity in (8.4.2). For this let $f = f\_{\mathcal{D}} + f\_\eta \in \ker \Gamma\_1$, which means $\tau\_{\mathcal{N}}f\_{\mathcal{D}} = 0$. Thus, $f\_{\mathcal{D}} \in \operatorname{dom} T\_{\min}$ by (8.2.15) and it follows that $A\_1 \subset T\_{\min} \,\hat{+}\, \mathfrak{N}\_\eta(T\_{\max})$. The inclusion $T\_{\min} \,\hat{+}\, \mathfrak{N}\_\eta(T\_{\max}) \subset A\_1$ is clear from the definition of $\Gamma\_1$. This leads to the second identity in (8.4.2). □
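The mechanism behind Theorem 8.4.1 is already visible in the elementary one-dimensional analogue (added here as an illustration; in one dimension the traces need no regularization, since point evaluation is continuous on the maximal domain $H^2(0,1)$): for $T\_{\max} = -d^2/dx^2$ on $(0,1)$ one may take

```latex
\Gamma_0 f = \begin{pmatrix} f(0) \\ f(1) \end{pmatrix},
\qquad
\Gamma_1 f = \begin{pmatrix} f'(0) \\ -f'(1) \end{pmatrix},
```

and integration by parts yields the abstract Green identity $(T\_{\max}f, g)\_{L^2(0,1)} - (f, T\_{\max}g)\_{L^2(0,1)} = (\Gamma\_1 f, \Gamma\_0 g)\_{\mathbb{C}^2} - (\Gamma\_0 f, \Gamma\_1 g)\_{\mathbb{C}^2}$, while $\ker \Gamma\_0$ is the Dirichlet realization, in analogy to $A\_0 = A\_{\mathcal{D}}$ in (8.4.2).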

**Remark 8.4.2.** The boundary triplet $\{L^2(\partial\Omega), \Gamma\_0, \Gamma\_1\}$ in Theorem 8.4.1 is closely related to the boundary triplet $\{\mathfrak{N}\_\eta(T\_{\max}), \Gamma'\_0, \Gamma'\_1\}$ in Corollary 5.5.12, where

$$
\Gamma\_0'f = f\_\eta \quad \text{and} \quad \Gamma\_1'f = P\_{\mathfrak{N}\_\eta(T\_{\text{max}})}(A\_\mathcal{D} - \eta)f\_\mathcal{D}, \quad f = f\_\mathcal{D} + f\_\eta \in \text{dom}\,T\_{\text{max}}.
$$

In fact, one has $\ker \Gamma\_0 = \ker \Gamma'\_0$ and $\ker \Gamma\_1 = \ker \Gamma'\_1$, and hence

$$
\begin{pmatrix} \Gamma'\_0 \\ \Gamma'\_1 \end{pmatrix} = \begin{pmatrix} W\_{11} & 0 \\ 0 & W\_{22} \end{pmatrix} \begin{pmatrix} \Gamma\_0 \\ \Gamma\_1 \end{pmatrix}.
$$

with some $2 \times 2$ operator matrix $W = (W\_{ij})\_{i,j=1}^2$ as in Theorem 2.5.1; see also Corollary 2.5.5. In the present situation it follows from Theorem 8.3.9 and (8.4.1) that the restriction $\iota\_-\widetilde{\tau}\_{\mathcal{D}} : \mathfrak{N}\_\eta(T\_{\max}) \to L^2(\partial\Omega)$ is bijective and one concludes $W\_{11} = (\iota\_-\widetilde{\tau}\_{\mathcal{D}})^{-1}$. Now the properties of $W$ imply that $W\_{22} = (\iota\_-\widetilde{\tau}\_{\mathcal{D}})^*$.

With the help of the extended Dirichlet-to-Neumann map in Corollary 8.3.12 one obtains a more explicit description of the domain of the self-adjoint operator A<sup>1</sup> in (8.4.2).

**Proposition 8.4.3.** Let $\Omega \subset \mathbb{R}^n$ be a bounded $C^2$-domain, let $A\_{\mathcal{D}}$ be the self-adjoint Dirichlet realization of $-\Delta + V$ in $L^2(\Omega)$, and fix $\eta \in \rho(A\_{\mathcal{D}}) \cap \mathbb{R}$. Moreover, let $\widetilde{D}(\eta)$ be the extended Dirichlet-to-Neumann map in Corollary 8.3.12. Then the self-adjoint extension $A\_1$ of $T\_{\min}$ in (8.4.2) is defined on

$$\operatorname{dom} A\_1 = \left\{ f \in \operatorname{dom} T\_{\max} : \widetilde{\tau}\_{\mathcal{N}} f = \widetilde{D}(\eta) \widetilde{\tau}\_{\mathcal{D}} f \right\}. \tag{8.4.4}$$

In the case that $\eta < m(A\_{\mathcal{D}})$, where $m(A\_{\mathcal{D}})$ denotes the lower bound of $A\_{\mathcal{D}}$, the operator $A\_1$ coincides with the Kreĭn type extension $S\_{K,\eta}$ of $T\_{\min}$ in Definition 5.4.2. In particular, if $m(A\_{\mathcal{D}}) > 0$ and $\eta = 0$, then $A\_1 = S\_{K,0}$ is the Kreĭn–von Neumann extension of $T\_{\min}$.

Proof. It is clear from Theorem 8.4.1 that

$$\operatorname{dom} A\_1 = \ker \Gamma\_1 = \left\{ f = f\_\mathcal{D} + f\_\eta \in \operatorname{dom} T\_{\max} : \tau\_\mathcal{N} f\_\mathcal{D} = 0 \right\}.$$

Let $\widetilde{\tau}\_{\mathcal{N}}$ be the extension of the Neumann trace $\tau\_{\mathcal{N}}$ to the maximal domain in Theorem 8.3.10. Then the boundary condition $\tau\_{\mathcal{N}}f\_{\mathcal{D}} = 0$ can be rewritten as $\widetilde{\tau}\_{\mathcal{N}}f = \widetilde{\tau}\_{\mathcal{N}}f\_\eta$, where $f = f\_{\mathcal{D}} + f\_\eta \in \operatorname{dom} T\_{\max}$. With the help of the extended Dirichlet-to-Neumann map

$$\widetilde{D}(\eta): H^{-1/2}(\partial\Omega) \to H^{-3/2}(\partial\Omega), \quad \widetilde{\tau}\_{\mathcal{D}}f\_{\eta} \mapsto \widetilde{\tau}\_{\mathcal{N}}f\_{\eta}, \qquad f\_{\eta} \in \mathfrak{N}\_{\eta}(T\_{\max}),$$

one obtains $\widetilde{\tau}\_{\mathcal{N}}f\_\eta = \widetilde{D}(\eta)\widetilde{\tau}\_{\mathcal{D}}f\_\eta = \widetilde{D}(\eta)\widetilde{\tau}\_{\mathcal{D}}f$, which implies (8.4.4).

If $\eta \in \mathbb{R}$ is chosen smaller than the lower bound $m(A\_{\mathcal{D}})$ of $A\_{\mathcal{D}}$, then it follows from the second identity in (8.4.2), Lemma 5.4.1, and Definition 5.4.2 that the Kreĭn type extension $S\_{K,\eta} = T\_{\min} \,\hat{+}\, \mathfrak{N}\_\eta(T\_{\max})$ of $T\_{\min}$ and $A\_1$ coincide. In the special case $m(A\_{\mathcal{D}}) > 0$ and $\eta = 0$ one has $A\_1 = S\_{K,0}$, which is the Kreĭn–von Neumann extension of $T\_{\min}$; cf. Definition 5.4.2. □

In the next proposition the γ-field and the Weyl function corresponding to the boundary triplet $\{L^2(\partial\Omega), \Gamma\_0, \Gamma\_1\}$ in Theorem 8.4.1 are provided. Note that for $f = f\_{\mathcal{D}} + f\_\eta$ decomposed as in (8.4.1) one has

$$
\Gamma\_0 f = \iota\_- \widetilde{\tau}\_{\mathcal{D}} f = \iota\_- \widetilde{\tau}\_{\mathcal{D}} f\_{\eta},
$$

as $\ker \widetilde{\tau}\_{\mathcal{D}} = \ker \tau\_{\mathcal{D}} = \operatorname{dom} A\_{\mathcal{D}}$ by Theorem 8.3.9. It is also clear from (8.4.1) that $\Gamma\_0$ is a bijective mapping from $\mathfrak{N}\_\eta(T\_{\max})$ onto $L^2(\partial\Omega)$.

**Proposition 8.4.4.** Let $\{L^2(\partial\Omega), \Gamma\_0, \Gamma\_1\}$ be the boundary triplet for $(T\_{\min})^* = T\_{\max}$ in Theorem 8.4.1 and let $f\_\eta(\varphi)$ be the unique element in $\mathfrak{N}\_\eta(T\_{\max})$ such that $\Gamma\_0 f\_\eta(\varphi) = \varphi$. Then for all $\lambda \in \rho(A\_{\mathcal{D}})$ the γ-field corresponding to the boundary triplet $\{L^2(\partial\Omega), \Gamma\_0, \Gamma\_1\}$ is given by

$$\gamma(\lambda)\varphi = \left(I + (\lambda - \eta)(A\_{\rm D} - \lambda)^{-1}\right)f\_{\eta}(\varphi), \quad \varphi \in L^2(\partial\Omega), \tag{8.4.5}$$

and $f_\lambda(\varphi) := \gamma(\lambda)\varphi$ is the unique element in $\mathfrak N_\lambda(T_{\max})$ such that $\Gamma_0 f_\lambda(\varphi) = \varphi$. Furthermore, one has

$$\gamma(\lambda)^* = -\iota_+ \tau_{\mathrm N} (A_{\mathrm D} - \overline{\lambda})^{-1}, \quad \lambda \in \rho(A_{\mathrm D}).\tag{8.4.6}$$

The Weyl function $M$ corresponding to the boundary triplet $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ is given by

$$M(\lambda)\varphi = (\eta - \lambda)\iota\_+\tau\_\mathcal{N}(A\_\mathcal{D} - \lambda)^{-1}f\_\eta(\varphi), \quad \varphi \in L^2(\partial\Omega).$$

In particular, $\gamma(\eta)\varphi = f_\eta(\varphi)$ and $M(\eta)\varphi = 0$ for all $\varphi \in L^2(\partial\Omega)$.

Proof. Since by definition $\gamma(\eta)$ is the inverse of the restriction of $\Gamma_0$ to $\mathfrak N_\eta(T_{\max})$, it is clear that $\gamma(\eta)\varphi = f_\eta(\varphi)$, where $f_\eta(\varphi)$ is the unique element in $\mathfrak N_\eta(T_{\max})$ such that $\Gamma_0 f_\eta(\varphi) = \varphi$. Both (8.4.5) and (8.4.6) are consequences of Proposition 2.3.2. In order to compute the Weyl function, note that

$$\gamma(\lambda)\varphi = f_\eta(\varphi) + (\lambda - \eta)(A_{\mathrm D} - \lambda)^{-1}f_\eta(\varphi)$$

is decomposed into $(\lambda - \eta)(A_{\mathrm D} - \lambda)^{-1}f_\eta(\varphi) \in \operatorname{dom} A_{\mathrm D}$ and $f_\eta(\varphi) \in \mathfrak N_\eta(T_{\max})$, and hence by the definition of $\Gamma_1$ it follows that

$$\begin{split} M(\lambda)\varphi &= \Gamma\_1\gamma(\lambda)\varphi = -\iota\_+\tau\_\mathcal{N}\left[ (\lambda-\eta)(A\_\mathcal{D}-\lambda)^{-1}f\_\eta(\varphi) \right] \\ &= (\eta-\lambda)\iota\_+\tau\_\mathcal{N}(A\_\mathcal{D}-\lambda)^{-1}f\_\eta(\varphi). \end{split}$$

The assertion $M(\eta)\varphi = 0$ for all $\varphi \in L^2(\partial\Omega)$ is clear from the above. □
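The algebraic mechanism behind (8.4.5) is the first resolvent identity. The following sketch checks it numerically on a finite-dimensional stand-in, with a random symmetric matrix playing the role of $A_{\mathrm D}$; this only illustrates the resolvent algebra, not the boundary triplet itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
G = rng.standard_normal((n, n))
A = (G + G.T) / 2                  # symmetric matrix in the role of A_D
eta = -10.0                        # real point safely in the resolvent set
lam = 0.3 + 0.7j                   # non-real point, automatically in the resolvent set

def R(z):
    """Resolvent (A - z)^{-1}."""
    return np.linalg.inv(A - z * np.eye(n))

# First resolvent identity: R(lam) - R(eta) = (lam - eta) R(lam) R(eta).
err1 = np.abs(R(lam) - R(eta) - (lam - eta) * R(lam) @ R(eta)).max()

# Product form behind (8.4.5): I + (lam - eta) R(lam) carries the
# eta-resolvent to the lam-resolvent, just as it carries gamma(eta) to gamma(lam).
err2 = np.abs((np.eye(n) + (lam - eta) * R(lam)) @ R(eta) - R(lam)).max()
```

Both errors vanish up to rounding; this is exactly the identity used to transport $f_\eta(\varphi) = \gamma(\eta)\varphi$ to $\gamma(\lambda)\varphi$.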

The Weyl function $M$ in Proposition 8.4.4 is closely connected with the Dirichlet-to-Neumann map $D(\lambda)$ and its extension $\widetilde D(\lambda)$, $\lambda \in \rho(A_{\mathrm D})$, in Definition 8.3.6 and Corollary 8.3.12. This connection will be made explicit in the next lemma. First, consider $f = f_{\mathrm D} + f_\eta \in \operatorname{dom} T_{\max}$ as in (8.4.1). In the present situation one has

$$
\tau\_\mathcal{N} f\_\mathcal{D} = \widetilde{\tau}\_\mathcal{N} f\_\mathcal{D} = \widetilde{\tau}\_\mathcal{N} f - \widetilde{\tau}\_\mathcal{N} f\_\eta.
$$

Hence, making use of $\widetilde D(\eta)\widetilde\tau_{\mathrm D} f_\eta = \widetilde\tau_{\mathrm N} f_\eta$ (see Corollary 8.3.12) and the identity $\ker \widetilde\tau_{\mathrm D} = \operatorname{dom} A_{\mathrm D}$, it follows that

$$\tau_{\mathrm N} f_{\mathrm D} = \widetilde\tau_{\mathrm N} f - \widetilde D(\eta)\,\widetilde\tau_{\mathrm D} f_\eta = \widetilde\tau_{\mathrm N} f - \widetilde D(\eta)\,\widetilde\tau_{\mathrm D} f. \tag{8.4.7}$$

**Lemma 8.4.5.** Let $M$ be the Weyl function corresponding to the boundary triplet in Theorem 8.4.1 and let $\widetilde D(\lambda)$, $\lambda \in \rho(A_{\mathrm D})$, be the extended Dirichlet-to-Neumann map in Corollary 8.3.12. Then the regularization property

$$\operatorname{ran}\left(\widetilde D(\eta) - \widetilde D(\lambda)\right) \subset H^{1/2}(\partial\Omega) \tag{8.4.8}$$

holds and one has

$$M(\lambda)\varphi = \iota_+ \left(\widetilde D(\eta) - \widetilde D(\lambda)\right) \iota_-^{-1} \varphi, \qquad \varphi \in L^2(\partial \Omega), \tag{8.4.9}$$

and

$$M(\lambda)\varphi = \iota_+ \left( D(\eta) - D(\lambda) \right) \iota_-^{-1} \varphi, \qquad \varphi \in H^2(\partial \Omega). \tag{8.4.10}$$

Proof. For $\psi \in H^{-1/2}(\partial\Omega)$ choose $f_\lambda \in \mathfrak N_\lambda(T_{\max})$ such that $\widetilde\tau_{\mathrm D} f_\lambda = \psi$ or, equivalently, $\Gamma_0 f_\lambda = \iota_-\psi$. Decompose $f_\lambda$ in the form $f_\lambda = f_{\mathrm D}^\lambda + f_{\lambda,\eta}$ with $f_{\mathrm D}^\lambda \in \operatorname{dom} A_{\mathrm D}$ and $f_{\lambda,\eta} \in \mathfrak N_\eta(T_{\max})$. Then one computes

$$\left(\widetilde D(\eta) - \widetilde D(\lambda)\right)\psi = \widetilde D(\eta)\widetilde\tau_{\mathrm D} f_\lambda - \widetilde\tau_{\mathrm N} f_\lambda = -\tau_{\mathrm N} f_{\mathrm D}^{\lambda},\tag{8.4.11}$$

where (8.4.7) was used in the last step for $f = f_\lambda$. Since $f_{\mathrm D}^\lambda \in \operatorname{dom} A_{\mathrm D} \subset H^2(\Omega)$, the regularization property (8.4.8) follows from (8.2.12). From (8.4.11) one also concludes that

$$\iota_+ \left( \widetilde D(\eta) - \widetilde D(\lambda) \right) \iota_-^{-1} \Gamma_0 f_\lambda = -\iota_+ \tau_{\mathrm N} f_{\mathrm D}^\lambda = \Gamma_1 f_\lambda,$$

and since $M(\lambda)\Gamma_0 f_\lambda = \Gamma_1 f_\lambda$ by the definition of the Weyl function, this shows (8.4.9).

It remains to prove the second assertion (8.4.10). For this, note that the restriction of $\iota_-^{-1} : L^2(\partial\Omega) \to H^{-1/2}(\partial\Omega)$ to $H^2(\partial\Omega)$ is an isometric isomorphism from $H^2(\partial\Omega)$ onto $H^{3/2}(\partial\Omega)$ by Corollary 8.2.2. Furthermore, it follows from the definition that the extended Dirichlet-to-Neumann map $\widetilde D(\lambda)$ coincides with the Dirichlet-to-Neumann map $D(\lambda)$ on $H^{3/2}(\partial\Omega)$. With these observations it is clear that (8.4.10) follows when restricting (8.4.9) to $H^2(\partial\Omega)$. □
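The identity (8.4.9) can be illustrated in a discrete model, where the Dirichlet-to-Neumann map becomes the Schur complement of the finite-difference matrix onto the boundary nodes. The sketch below is a one-dimensional stand-in with hypothetical grid size and potential, not the construction of the text; it checks that $M(\lambda) = D(\eta) - D(\lambda)$ vanishes at $\lambda = \eta$, is symmetric for real $\lambda$, and increases in $\lambda$ below the Dirichlet spectrum, as a Weyl function should.

```python
import numpy as np

# Finite-difference model of -f'' + V f on (0,1); nodes 0..n+1, boundary {0, n+1}.
n = 50
h = 1.0 / (n + 1)
x = np.linspace(0.0, 1.0, n + 2)
V = 1.0 + np.sin(np.pi * x) ** 2          # stand-in bounded real potential
N = n + 2
L = np.diag(2.0 / h**2 + V) \
    - np.diag(np.full(N - 1, 1.0 / h**2), 1) \
    - np.diag(np.full(N - 1, 1.0 / h**2), -1)
bd = [0, N - 1]                           # boundary nodes
it = list(range(1, N - 1))                # interior nodes

def dtn(z):
    """Discrete Dirichlet-to-Neumann map: Schur complement of L - z on the boundary."""
    K = L - z * np.eye(N)
    return K[np.ix_(bd, bd)] - K[np.ix_(bd, it)] @ np.linalg.solve(
        K[np.ix_(it, it)], K[np.ix_(it, bd)])

dir_min = np.linalg.eigvalsh(L[np.ix_(it, it)]).min()   # bottom of the "Dirichlet" spectrum
eta, lam1, lam2 = dir_min - 5.0, dir_min - 3.0, dir_min - 1.0
M = lambda z: dtn(eta) - dtn(z)           # discrete analogue of (8.4.9), up to the iota factors
```

Since $\frac{\mathrm d}{\mathrm dz}$ of the Schur complement is negative definite below the interior spectrum, the difference `dtn(eta) - dtn(lam)` grows monotonically in `lam`, mirroring the Nevanlinna property of $M$.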

**Remark 8.4.6.** The boundary mappings in Theorem 8.4.1 and the corresponding $\gamma$-field and Weyl function depend on the choice of $\eta \in \rho(A_{\mathrm D}) \cap \mathbb R$ and the decomposition of $f \in \operatorname{dom} T_{\max}$ as $f = f_{\mathrm D}^\eta + f_\eta$; observe that also $f_{\mathrm D} = f_{\mathrm D}^\eta \in \operatorname{dom} A_{\mathrm D}$ depends on $\eta$. Suppose now that the boundary mappings are defined with respect to some other $\eta' \in \rho(A_{\mathrm D}) \cap \mathbb R$ and decompose $f$ accordingly as $f = f_{\mathrm D}^{\eta'} + f_{\eta'}$. If $\Gamma_0^\eta, \Gamma_1^\eta$ denote the boundary mappings in Theorem 8.4.1 with respect to $\eta$, and $\Gamma_0^{\eta'}, \Gamma_1^{\eta'}$ denote the boundary mappings in Theorem 8.4.1 with respect to $\eta'$, then one has

$$\begin{pmatrix} \Gamma_0^{\eta'} \\ \Gamma_1^{\eta'} \end{pmatrix} = \begin{pmatrix} I & 0 \\ -M(\eta') & I \end{pmatrix} \begin{pmatrix} \Gamma_0^{\eta} \\ \Gamma_1^{\eta} \end{pmatrix}. \tag{8.4.12}$$

In fact, that $\Gamma_0^{\eta'} f = \Gamma_0^\eta f$ for $f \in \operatorname{dom} T_{\max}$ is clear from Theorem 8.4.1, and for the remaining identity in (8.4.12) it follows from Lemma 8.4.5 that

$$\begin{split} -M(\eta')\Gamma\_{0}^{\eta}f + \Gamma\_{1}^{\eta}f &= \iota\_{+}\left(\tilde{D}(\eta') - \tilde{D}(\eta)\right)\iota\_{-}^{-1}\Gamma\_{0}^{\eta}f + \Gamma\_{1}^{\eta}f \\ &= \iota\_{+}\left(\tilde{D}(\eta')\tilde{\tau}\_{\mathcal{D}}f - \tilde{D}(\eta)\tilde{\tau}\_{\mathcal{D}}f\right) - \iota\_{+}\tau\_{\mathcal{N}}f\_{\mathcal{D}}^{\eta} \\ &= \iota\_{+}\left(\tilde{D}(\eta')\tilde{\tau}\_{\mathcal{D}}f\_{\eta'} - \tilde{D}(\eta)\tilde{\tau}\_{\mathcal{D}}f\_{\eta}\right) - \iota\_{+}\tau\_{\mathcal{N}}f\_{\mathcal{D}}^{\eta} \\ &= \iota\_{+}\left(\tilde{\tau}\_{\mathcal{N}}f\_{\eta'} - \tilde{\tau}\_{\mathcal{N}}f\_{\eta}\right) - \iota\_{+}\tau\_{\mathcal{N}}f\_{\mathcal{D}}^{\eta} \\ &= \iota\_{+}\tau\_{\mathcal{N}}(f\_{\mathcal{D}}^{\eta} - f\_{\mathcal{D}}^{\eta'}) - \iota\_{+}\tau\_{\mathcal{N}}f\_{\mathcal{D}}^{\eta} \\ &= \Gamma\_{1}^{\eta'}f. \end{split}$$

Finally, note that the $\gamma$-fields and Weyl functions of the boundary triplets in Theorem 8.4.1 for different $\eta$ and $\eta'$ transform accordingly; cf. Proposition 2.5.3.

Next some classes of extensions of $T_{\min}$ and their spectral properties are briefly discussed. Let $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ be the boundary triplet in Theorem 8.4.1 with corresponding $\gamma$-field $\gamma$ and Weyl function $M$ in Proposition 8.4.4. According to Corollary 2.1.4, the self-adjoint (maximal dissipative, maximal accumulative) extensions $A_\Theta \subset T_{\max}$ of $T_{\min}$ are in one-to-one correspondence with the self-adjoint (maximal dissipative, maximal accumulative) relations $\Theta$ in $L^2(\partial\Omega)$ via

$$\begin{split} \text{dom}\,A\_{\Theta} &= \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, \{ \Gamma\_0 f, \Gamma\_1 f \} \in \Theta \right\} \\ &= \left\{ f \in \text{dom}\,T\_{\text{max}} \, : \, \{ \iota\_- \tilde{\tau}\_{\text{D}} f, -\iota\_+ \tau\_{\text{N}} f\_{\text{D}} \} \in \Theta \right\}. \end{split} \tag{8.4.13}$$

If $\Theta$ is an operator in $L^2(\partial\Omega)$, then the domain of $A_\Theta$ is given by

$$\operatorname{dom} A_\Theta = \left\{ f \in \operatorname{dom} T_{\max} \, : \, \Theta\, \iota_- \widetilde\tau_{\mathrm D} f = -\iota_+ \tau_{\mathrm N} f_{\mathrm D} \right\}. \tag{8.4.14}$$

Let $\Theta$ be a self-adjoint relation in $L^2(\partial\Omega)$ and let $A_\Theta$ be the corresponding self-adjoint realization of $-\Delta + V$ in $L^2(\Omega)$. By Corollary 1.10.9, $\Theta$ can be represented in terms of bounded operators $\mathcal A, \mathcal B \in \mathbf B(L^2(\partial\Omega))$ satisfying the conditions $\mathcal A^*\mathcal B = \mathcal B^*\mathcal A$, $\mathcal A\mathcal B^* = \mathcal B\mathcal A^*$, and $\mathcal A^*\mathcal A + \mathcal B^*\mathcal B = I = \mathcal A\mathcal A^* + \mathcal B\mathcal B^*$ such that

$$\Theta = \left\{ \{ \mathcal{A}\varphi, \mathcal{B}\varphi \} : \varphi \in L^2(\partial \Omega) \right\} = \left\{ \{ \psi, \psi' \} : \mathcal{A}^\*\psi' = \mathcal{B}^\*\psi \right\}.$$

In this case one has

$$\operatorname{dom} A\_{\Theta} = \left\{ f \in \operatorname{dom} T\_{\max} \, : \, -\mathcal{A}^\* \iota\_+ \tau\_\mathcal{N} f\_\mathcal{D} = \mathcal{B}^\* \iota\_- \widetilde{\tau}\_\mathcal{D} f \right\},$$

and for $\lambda \in \rho(A_\Theta) \cap \rho(A_{\mathrm D})$ the Kreĭn formula for the corresponding resolvents

$$\begin{split} (A_\Theta - \lambda)^{-1} &= (A_{\mathrm D} - \lambda)^{-1} + \gamma(\lambda) \bigl( \Theta - M(\lambda) \bigr)^{-1} \gamma(\overline{\lambda})^* \\ &= (A_{\mathrm D} - \lambda)^{-1} + \gamma(\lambda) \mathcal A \bigl( \mathcal B - M(\lambda) \mathcal A \bigr)^{-1} \gamma(\overline{\lambda})^* \end{split} \tag{8.4.15}$$

holds by Theorem 2.6.1 and Corollary 2.6.3. Recall that in the present situation the spectrum of $A_{\mathrm D} = A_0$ is discrete by Proposition 8.3.2. According to Theorem 2.6.2, $\lambda \in \rho(A_{\mathrm D})$ is an eigenvalue of $A_\Theta$ if and only if $\ker(\Theta - M(\lambda))$ or, equivalently, $\ker(\mathcal B - M(\lambda)\mathcal A)$ is nontrivial, in which case

$$\ker\left(A\_{\Theta} - \lambda\right) = \gamma(\lambda)\ker\left(\Theta - M(\lambda)\right) = \gamma(\lambda)\mathcal{A}\ker\left(\mathcal{B} - M(\lambda)\mathcal{A}\right).$$

Although Ω is a bounded C<sup>2</sup>-domain, it will turn out in Example 8.4.9 that the spectrum of A<sup>Θ</sup> is in general not discrete, and thus continuous spectrum may be present. It then follows from Theorem 2.6.2 and Theorem 2.6.5 that λ ∈ ρ(AD) belongs to the continuous spectrum σc(AΘ) (essential spectrum σess(AΘ) or discrete spectrum σd(AΘ)) of A<sup>Θ</sup> if and only if 0 belongs to σc(Θ−M(λ)) (σess(Θ−M(λ)) or σd(Θ − M(λ))).

For a complete description of the spectrum of $A_\Theta$, recall that the symmetric operator $T_{\min}$ is simple according to Proposition 8.3.13 and make use of a transform of the boundary triplet $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ as in Section 3.8. This reasoning implies that $\lambda$ is an eigenvalue of $A_\Theta$ if and only if $\lambda$ is a pole of the function

$$
\lambda \mapsto M\_{\Theta}(\lambda) = \left(\mathcal{A}^\* + \mathcal{B}^\* M(\lambda)\right) \left(\mathcal{B}^\* - \mathcal{A}^\* M(\lambda)\right)^{-1}.
$$

It is important to note in this context that the multiplicity of the eigenvalues of $A_\Theta$ is not necessarily finite and that the dimension of the eigenspace $\ker(A_\Theta - \lambda)$ of an isolated eigenvalue $\lambda$ of $A_\Theta$ coincides with the dimension of the range of the residue of $M_\Theta$ at $\lambda$. Furthermore, the continuous and absolutely continuous spectrum of $A_\Theta$ can be characterized as in Section 3.8; e.g., one has

$$\sigma\_{\mathrm{ac}}(A\_{\Theta}) = \bigcup\_{\varphi \in L^2(\partial \Omega)} \operatorname{clos}\_{\mathrm{ac}} \left( \{ x \in \mathbb{R} : 0 < \mathrm{Im} \left( M\_{\Theta}(x + i0)\varphi, \varphi \right)\_{L^2(\partial \Omega)} < \infty \} \right).$$

In the special case that the self-adjoint relation $\Theta$ in $L^2(\partial\Omega)$ is a bounded operator, the boundary condition reads as in (8.4.14), and according to Section 3.8 the spectral properties of the self-adjoint operator $A_\Theta$ can also be described with the help of the function

$$
\lambda \mapsto \left(\Theta - M(\lambda)\right)^{-1}.
$$

The general boundary conditions in (8.4.13) and (8.4.14) also contain typical classes of boundary conditions that are treated in spectral problems for partial differential operators, such as Neumann or Robin type boundary conditions. In the following example the standard Neumann boundary conditions are discussed. Note that the Neumann operator does not coincide with the Kreĭn type extension $S_{K,\eta}$ or the Kreĭn–von Neumann extension $S_{K,0}$ of $T_{\min}$ in Proposition 8.4.3.

**Example 8.4.7.** Let $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ be the boundary triplet in Theorem 8.4.1 and choose $\eta \in \rho(A_{\mathrm D}) \cap \mathbb R$ in (8.4.1) in such a way that also $\eta \in \rho(A_{\mathrm N})$, where $A_{\mathrm N}$ denotes the Neumann realization of $-\Delta + V$ in Proposition 8.3.3 and Theorem 8.3.4. Since both self-adjoint operators $A_{\mathrm D}$ and $A_{\mathrm N}$ are semibounded from below (or both have discrete spectrum), such an $\eta$ exists. In this situation it follows that the Dirichlet-to-Neumann map

$$D(\eta) : H^{3/2}(\partial \Omega) \to H^{1/2}(\partial \Omega)$$

in Definition 8.3.6 is a bijective mapping. Furthermore, $\iota_+ : H^{1/2}(\partial\Omega) \to L^2(\partial\Omega)$ is bijective and the restriction of $\iota_-^{-1} : L^2(\partial\Omega) \to H^{-1/2}(\partial\Omega)$ to $H^2(\partial\Omega)$ is an isometric isomorphism from $H^2(\partial\Omega)$ onto $H^{3/2}(\partial\Omega)$ according to Corollary 8.2.2. Hence, it is clear that

$$\Theta\_{\rm N} := \iota\_+ D(\eta) \iota\_-^{-1}, \qquad \text{dom}\, \Theta\_{\rm N} := H^2(\partial \Omega), \tag{8.4.16}$$

is a densely defined bijective operator in $L^2(\partial\Omega)$. Furthermore, for $\varphi \in H^2(\partial\Omega)$ and $\psi = \iota_-^{-1}\varphi \in H^{3/2}(\partial\Omega)$ it follows from Corollary 8.2.2 that

$$\left(\iota\_+ D(\eta)\iota\_-^{-1}\varphi,\varphi\right)\_{L^2(\partial\Omega)} = \left(\iota\_+ D(\eta)\psi,\iota\_-\psi\right)\_{L^2(\partial\Omega)} = (D(\eta)\psi,\psi)\_{L^2(\partial\Omega)}.\tag{8.4.17}$$

Now choose $f_\eta \in H^2(\Omega)$ such that $(-\Delta + V)f_\eta = \eta f_\eta$ and $\tau_{\mathrm D} f_\eta = \psi$, which is possible by (8.3.9) and (8.2.12). Then it follows from Definition 8.3.6 and the first Green identity in (8.2.18) that

$$\begin{split} \left( D(\eta)\psi, \psi \right)\_{L^{2}(\partial\Omega)} &= \left( D(\eta)\tau\_{\mathbb{D}}f\_{\eta}, \tau\_{\mathbb{D}}f\_{\eta} \right)\_{L^{2}(\partial\Omega)} \\ &= \left( \tau\_{\mathbb{N}}f\_{\eta}, \tau\_{\mathbb{D}}f\_{\eta} \right)\_{L^{2}(\partial\Omega)} \\ &= \| \nabla f\_{\eta} \|\_{L^{2}(\Omega; \mathbb{C}^{n})}^{2} + (\Delta f\_{\eta}, f\_{\eta})\_{L^{2}(\Omega)} \\ &= \| \nabla f\_{\eta} \|\_{L^{2}(\Omega; \mathbb{C}^{n})}^{2} + ((V-\eta)f\_{\eta}, f\_{\eta})\_{L^{2}(\Omega)}, \end{split} \tag{8.4.18}$$

so that $(D(\eta)\psi, \psi)_{L^2(\partial\Omega)} \in \mathbb R$, and hence $(\Theta_{\mathrm N}\varphi, \varphi)_{L^2(\partial\Omega)} \in \mathbb R$ for all $\varphi \in H^2(\partial\Omega)$ by (8.4.16)–(8.4.17). It follows that the bijective operator $\Theta_{\mathrm N}$ is symmetric in $L^2(\partial\Omega)$, and hence $\Theta_{\mathrm N}$ is an unbounded self-adjoint operator in $L^2(\partial\Omega)$ such that $0 \in \rho(\Theta_{\mathrm N})$.

The self-adjoint realization of $-\Delta + V$ in $L^2(\Omega)$ corresponding to the self-adjoint operator $\Theta_{\mathrm N}$ in (8.4.16) is denoted by $A_{\Theta_{\mathrm N}}$. A function $f \in \operatorname{dom} T_{\max}$ belongs to $\operatorname{dom} A_{\Theta_{\mathrm N}}$ if and only if

$$
\Gamma\_0 f = \iota\_- \tilde{\tau}\_{\mathcal{D}} f \in \text{dom}\,\Theta\_{\mathcal{N}} \quad \text{and} \quad \Gamma\_1 f = \Theta\_{\mathcal{N}} \Gamma\_0 f.
$$

Note that $\iota_-\widetilde\tau_{\mathrm D} f \in \operatorname{dom}\Theta_{\mathrm N}$ forces $\widetilde\tau_{\mathrm D} f \in H^{3/2}(\partial\Omega)$, and hence $f \in H^2(\Omega)$ and $\widetilde\tau_{\mathrm D} f = \tau_{\mathrm D} f$ by (8.2.12) and Theorem 8.3.9. It then follows from (8.4.7) and (8.4.16) that the boundary condition $\Gamma_1 f = \Theta_{\mathrm N}\Gamma_0 f$ takes on the form

$$\iota_+ D(\eta)\tau_{\mathrm D} f - \iota_+\tau_{\mathrm N} f = -\iota_+\tau_{\mathrm N} f_{\mathrm D} = \Gamma_1 f = \Theta_{\mathrm N}\Gamma_0 f = \iota_+ D(\eta)\tau_{\mathrm D} f,$$

that is, $\tau_{\mathrm N} f = 0$. Hence, it has been shown that $\operatorname{dom} A_{\Theta_{\mathrm N}} \subset H^2(\Omega)$ and that $\tau_{\mathrm N} f = 0$ for all $f \in \operatorname{dom} A_{\Theta_{\mathrm N}}$. Therefore, $A_{\Theta_{\mathrm N}} \subset A_{\mathrm N}$, and since both operators are self-adjoint one concludes that $A_{\Theta_{\mathrm N}} = A_{\mathrm N}$.

Note also that by (8.4.16) and Lemma 8.4.5 one has

$$\left(\Theta\_{\mathcal{N}} - M(\lambda)\right)\varphi = \iota\_+ D(\eta)\iota\_-^{-1}\varphi - \iota\_+ \left(D(\eta) - D(\lambda)\right)\iota\_-^{-1}\varphi = \iota\_+ D(\lambda)\iota\_-^{-1}\varphi$$

for $\varphi \in H^2(\partial\Omega)$ and $\lambda \in \rho(A_{\mathrm D})$. Hence, it follows that $\Theta_{\mathrm N} - M(\lambda)$ is a bijective operator in $L^2(\partial\Omega)$ for all $\lambda \in \rho(A_{\mathrm D}) \cap \rho(A_{\mathrm N})$ which is defined on $H^2(\partial\Omega)$. Therefore, (8.4.15) implies that the resolvents of $A_{\mathrm D}$ and $A_{\mathrm N}$ are related via

$$(A\_N - \lambda)^{-1} = (A\_D - \lambda)^{-1} + \gamma(\lambda)\iota\_- D(\lambda)^{-1} \iota\_+^{-1} \gamma(\overline{\lambda})^\*,$$

where $\gamma$ is the $\gamma$-field corresponding to the boundary triplet $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ in Proposition 8.4.4.
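In a finite-difference model the flavor of this resolvent relation can be seen directly: the boundary space is finite-dimensional (two points in one dimension), so the Dirichlet and Neumann resolvents differ by an operator of rank at most two. The sketch below uses a hypothetical one-dimensional discretization of $-\mathrm d^2/\mathrm dx^2$, not the operators of the text.

```python
import numpy as np

# Discrete Laplacian on n grid points; Dirichlet vs. Neumann rows differ only
# in the first and last diagonal entries (one-sided stencil for f'(0)=f'(1)=0).
n = 80
h = 1.0 / (n + 1)
A_D = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h**2
A_N = A_D.copy()
A_N[0, 0] = A_N[-1, -1] = 1.0 / h**2

z = -1.0                                   # z < 0 lies in both resolvent sets
R_D = np.linalg.inv(A_D - z * np.eye(n))
R_N = np.linalg.inv(A_N - z * np.eye(n))

diff = R_N - R_D                           # Krein-type resolvent difference
rank = np.linalg.matrix_rank(diff, tol=1e-8)
```

By the second resolvent identity, `diff` equals `-R_N @ (A_N - A_D) @ R_D` with `A_N - A_D` of rank two, so the difference has rank two; in the text the analogous difference is carried by $\gamma(\lambda)\iota_- D(\lambda)^{-1}\iota_+^{-1}\gamma(\overline\lambda)^*$, acting through the boundary space $L^2(\partial\Omega)$.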

The next example is a generalization of the previous example from Neumann to local and nonlocal Robin boundary conditions.

**Example 8.4.8.** Let $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ be as in the previous example and fix some $\eta \in \rho(A_{\mathrm D}) \cap \rho(A_{\mathrm N}) \cap \mathbb R$. Then the operator $\Theta_{\mathrm N} = \iota_+ D(\eta)\iota_-^{-1}$ in (8.4.16) is an unbounded self-adjoint operator in $L^2(\partial\Omega)$ with domain $H^2(\partial\Omega)$, and $0 \in \rho(\Theta_{\mathrm N})$. Assume that

$$B: H^{3/2}(\partial\Omega) \to H^{1/2}(\partial\Omega) \tag{8.4.19}$$

is compact as an operator from $H^{3/2}(\partial\Omega)$ into $H^{1/2}(\partial\Omega)$ and that $B$ is symmetric in $L^2(\partial\Omega)$, that is, $(B\psi, \psi)_{L^2(\partial\Omega)} \in \mathbb R$ for all $\psi \in \operatorname{dom} B = H^{3/2}(\partial\Omega)$. Then it follows that

$$\iota\_+ B \iota\_-^{-1} : H^2(\partial \Omega) \to L^2(\partial \Omega),$$

is compact as an operator from $H^2(\partial\Omega)$ into $L^2(\partial\Omega)$, and as in (8.4.17) one sees that $\iota_+ B\iota_-^{-1}$ is symmetric in $L^2(\partial\Omega)$. Consider the operator

$$\Theta\_B := \iota\_+ \left( D(\eta) - B \right) \iota\_-^{-1} = \Theta\_\mathcal{N} - \iota\_+ B \iota\_-^{-1}, \quad \text{dom}\, \Theta\_B = H^2(\partial \Omega), \tag{8.4.20}$$

and observe that the symmetric operator $\iota_+ B\iota_-^{-1}$ is a relatively compact perturbation of the self-adjoint operator $\Theta_{\mathrm N}$ in $L^2(\partial\Omega)$, that is, the operator

$$\iota\_+ B \iota\_-^{-1} \Theta\_N^{-1}$$

is compact in $L^2(\partial\Omega)$. It is well known from standard perturbation results (see, e.g., [652, Corollary 2 of Theorem XIII.14]) that in this case the perturbed operator $\Theta_B$ is self-adjoint in $L^2(\partial\Omega)$.

The self-adjoint realization of $-\Delta + V$ in $L^2(\Omega)$ corresponding to the self-adjoint operator $\Theta_B$ in (8.4.20) is denoted by $A_{\Theta_B}$. It is clear that a function $f \in \operatorname{dom} T_{\max}$ belongs to $\operatorname{dom} A_{\Theta_B}$ if and only if

$$
\Gamma\_0 f = \iota\_- \widetilde{\tau}\_{\mathcal{D}} f \in \text{dom}\,\Theta\_B \quad \text{and} \quad \Gamma\_1 f = \Theta\_B \Gamma\_0 f. \tag{8.4.21}
$$

In the same way as in Example 8.4.7, the fact that $\iota_-\widetilde\tau_{\mathrm D} f \in \operatorname{dom}\Theta_B$ implies that $f \in H^2(\Omega)$ and $\widetilde\tau_{\mathrm D} f = \tau_{\mathrm D} f$, and the boundary condition $\Gamma_1 f = \Theta_B\Gamma_0 f$ takes the explicit form

$$
\iota\_+ D(\eta) \tau\_\mathcal{D} f - \iota\_+ \tau\_\mathcal{N} f = \Gamma\_1 f = \Theta\_B \Gamma\_0 f = \iota\_+ \left( D(\eta) - B \right) \tau\_\mathcal{D} f,
$$

that is, $\tau_{\mathrm N} f = B\tau_{\mathrm D} f$. Conversely, if $f \in H^2(\Omega)$ is such that $\tau_{\mathrm N} f = B\tau_{\mathrm D} f$, then $f$ satisfies (8.4.21) and hence $f \in \operatorname{dom} A_{\Theta_B}$. Thus, it has been shown that the self-adjoint operator $A_{\Theta_B}$ is defined on

$$\text{dom}\,A\_{\Theta\_B} = \left\{ f \in H^2(\Omega) : \tau\_{\mathbb{N}}f = B\tau\_{\mathbb{D}}f \right\}.$$

In the same way as in the previous example one obtains

$$
\Theta\_B - M(\lambda) = \iota\_+ \left( D(\lambda) - B \right) \iota\_-^{-1}
$$

and hence for all $\lambda \in \rho(A_{\mathrm D}) \cap \rho(A_{\Theta_B})$ one has

$$(A\_{\Theta\_B} - \lambda)^{-1} = (A\_D - \lambda)^{-1} + \gamma(\lambda)\iota\_- \left(D(\lambda) - B\right)^{-1} \iota\_+^{-1} \gamma(\overline{\lambda})^\*.$$

Finally, note that a sufficient condition for the operator $B$ in (8.4.19) to be compact is that $B : H^{3/2}(\partial\Omega) \to H^{1/2+\varepsilon}(\partial\Omega)$ is bounded for some $\varepsilon > 0$, or that $B : H^{3/2-\varepsilon'}(\partial\Omega) \to H^{1/2}(\partial\Omega)$ is bounded for some $\varepsilon' > 0$, since the embeddings $H^{1/2+\varepsilon}(\partial\Omega) \hookrightarrow H^{1/2}(\partial\Omega)$ and $H^{3/2}(\partial\Omega) \hookrightarrow H^{3/2-\varepsilon'}(\partial\Omega)$ are compact by (8.2.8).
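The compactness mechanism in this sufficient condition can be made concrete in a one-dimensional caricature, with Fourier series on a circle standing in for $\partial\Omega$: in $H^{1/2}$-normalized Fourier coordinates, the embedding $H^{1/2+\varepsilon}(\partial\Omega) \hookrightarrow H^{1/2}(\partial\Omega)$ acts as multiplication by $m(k) = (1+k^2)^{-\varepsilon/2} \to 0$, so the operator is the norm limit of its finite-rank truncations. The grid of cutoffs below is arbitrary.

```python
import numpy as np

eps = 0.1                                  # smoothness gain, as in H^{1/2+eps} -> H^{1/2}
k = np.arange(0, 200001)
m = (1.0 + k.astype(float) ** 2) ** (-eps / 2.0)   # Fourier multiplier of the embedding

def tail_norm(N):
    """Operator norm of the error after truncating to the modes |k| <= N."""
    return m[k > N].max()

norms = [tail_norm(N) for N in (10, 100, 1000, 10000)]
```

Since the multiplier is decreasing, `tail_norm(N)` equals $m(N+1)$ and tends to $0$ as $N \to \infty$; this is exactly the norm-limit-of-finite-rank criterion for compactness.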

In the next example it is shown that the (essential) spectrum of a self-adjoint realization A<sup>Θ</sup> of −Δ + V can be very general, depending on the properties of the parameter Θ. In particular, the self-adjoint realization A<sup>Θ</sup> may not be semibounded.

**Example 8.4.9.** Let $\eta \in \rho(A_{\mathrm D}) \cap \mathbb R$, consider an arbitrary self-adjoint operator $\Xi$ in the Hilbert space $\mathfrak N_\eta(T_{\max}) = \ker(T_{\max} - \eta)$, and assume that $\eta \in \rho(\Xi)$. Denote by $P_{\mathfrak N_\eta}$ the orthogonal projection in $L^2(\Omega)$ onto $\mathfrak N_\eta(T_{\max})$ and let $\iota_{\mathfrak N_\eta}$ be the natural embedding of $\mathfrak N_\eta(T_{\max})$ into $L^2(\Omega)$.

Let $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ be the boundary triplet in Theorem 8.4.1 with corresponding $\gamma$-field and Weyl function in Proposition 8.4.4. Note that $M(\eta) = 0$ and that both

$$P\_{\mathfrak{N}\_{\eta}}\gamma(\eta) : L^2(\partial\Omega) \to \mathfrak{N}\_{\eta}(T\_{\text{max}}) \quad \text{and} \quad \gamma(\eta)^\* \iota\_{\mathfrak{N}\_{\eta}} : \mathfrak{N}\_{\eta}(T\_{\text{max}}) \to L^2(\partial\Omega)$$

are isomorphisms. It follows that

$$\Theta := \left( \gamma(\eta)^\* \iota\_{\mathfrak{N}\_\eta} \right) (\Xi - \eta) \left( P\_{\mathfrak{N}\_\eta} \gamma(\eta) \right),$$

is a self-adjoint operator in $L^2(\partial\Omega)$ with $0 \in \rho(\Theta)$ and

$$\Theta^{-1} = \left(P\_{\mathfrak{N}\_{\eta}}\gamma(\eta)\right)^{-1}(\Xi-\eta)^{-1}\left(\gamma(\eta)^{\*}\iota\_{\mathfrak{N}\_{\eta}}\right)^{-1}.\tag{8.4.22}$$

Let $A_\Theta$ be the corresponding self-adjoint realization of $-\Delta + V$ in (8.4.13)–(8.4.14) defined on

$$\operatorname{dom} A\_{\Theta} = \left\{ f \in \operatorname{dom} T\_{\max} \, : \, \Theta \iota\_- \tilde{\tau}\_{\mathcal{D}} f = -\iota\_+ \tau\_{\mathcal{N}} f\_{\mathcal{D}} \right\}.$$

Since $M(\eta) = 0$ and $\eta \in \mathbb R$, Kreĭn's formula in (8.4.15) takes the form

$$\begin{split} (A_\Theta - \eta)^{-1} &= (A_{\mathrm D} - \eta)^{-1} + \gamma(\eta)\Theta^{-1}\gamma(\eta)^* \\ &= (A_{\mathrm D} - \eta)^{-1} + \begin{pmatrix} P_{\mathfrak N_\eta}\gamma(\eta)\Theta^{-1}\gamma(\eta)^*\iota_{\mathfrak N_\eta} & 0 \\ 0 & 0 \end{pmatrix}, \end{split}$$

where the block operator matrix is acting with respect to the space decomposition $L^2(\Omega) = \mathfrak N_\eta(T_{\max}) \oplus (\mathfrak N_\eta(T_{\max}))^\perp$. Using (8.4.22) one then concludes that

$$(A\_{\Theta} - \eta)^{-1} = (A\_{\mathcal{D}} - \eta)^{-1} + \begin{pmatrix} (\Xi - \eta)^{-1} & 0\\ 0 & 0 \end{pmatrix}.$$

In particular, since $(A_{\mathrm D} - \eta)^{-1}$ is compact, well-known perturbation results show that

$$
\sigma\_{\mathrm{ess}}\left( (A\_{\Theta} - \eta)^{-1} \right) = \sigma\_{\mathrm{ess}}\left( (\Xi - \eta)^{-1} \right) \cup \{ 0 \},
$$

and hence $\sigma_{\mathrm{ess}}(A_\Theta) = \sigma_{\mathrm{ess}}(\Xi)$.

## **8.5 Semibounded Schrödinger operators**

The semibounded self-adjoint realizations of $-\Delta + V$, where $V \in L^\infty(\Omega)$ is real, and the corresponding densely defined closed semibounded forms in $L^2(\Omega)$ are described in this section. For this purpose it is convenient to construct a boundary pair which is compatible with the boundary triplet in Theorem 8.4.1 and to apply the general results from Section 5.6. Under the additional assumption that $V \ge 0$, the nonnegative realizations of $-\Delta + V$ and the corresponding nonnegative forms in $L^2(\Omega)$ are discussed as a special case. In this situation the Kreĭn–von Neumann extension appears as the smallest nonnegative extension.

Let $\Omega \subset \mathbb R^n$ be a bounded $C^2$-domain and let $A_{\mathrm D}$ be the self-adjoint Dirichlet realization of $-\Delta + V$. It is clear from Proposition 8.3.2 that $A_{\mathrm D}$ coincides with the Friedrichs extension of the minimal operator $T_{\min}$ in (8.3.2) and that $A_{\mathrm D}$ is bounded from below with lower bound $m(A_{\mathrm D}) > v_-$, where $v_- = \operatorname{ess\,inf} V$. Furthermore, the resolvent of $A_{\mathrm D}$ is compact since the domain $\Omega$ is bounded. Therefore, the following description of the semibounded self-adjoint extensions of $T_{\min}$ is an immediate consequence of Proposition 5.5.6 and Proposition 5.5.8.

**Proposition 8.5.1.** Let $\Omega \subset \mathbb R^n$ be a bounded $C^2$-domain, let $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ be the boundary triplet for $(T_{\min})^* = T_{\max}$ from Theorem 8.4.1, and let

$$\begin{aligned} A\_{\Theta} &= -\Delta + V, \\ \operatorname{dom} A\_{\Theta} &= \left\{ f \in \operatorname{dom} T\_{\max} \, : \, \{ \Gamma\_0 f, \Gamma\_1 f \} \in \Theta \right\}, \end{aligned}$$

be a self-adjoint extension of $T_{\min}$ in $L^2(\Omega)$ corresponding to a self-adjoint relation $\Theta$ in $L^2(\partial\Omega)$ as in (8.4.13). Then

$$A\_{\Theta} \text{ is } semibounded \quad \Leftrightarrow \quad \Theta \text{ is } semibounded.$$

Recall also from Section 8.3 that the densely defined closed semibounded form $\mathfrak t_{A_{\mathrm D}}$ corresponding to $A_{\mathrm D}$ is defined on $H_0^1(\Omega)$. Now fix some $\eta < m(A_{\mathrm D})$, use the direct sum decomposition

$$\operatorname{dom} T\_{\text{max}} = \mathfrak{N}\_{\eta}(T\_{\text{max}}) + \operatorname{dom} A\_{\text{D}} = \mathfrak{N}\_{\eta}(T\_{\text{max}}) + \left(H^2(\Omega) \cap H\_0^1(\Omega)\right) \tag{8.5.1}$$

from (8.4.1) and Proposition 8.3.2, and consider the corresponding boundary triplet $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ for $(T_{\min})^* = T_{\max}$ in Theorem 8.4.1 given by

$$
\Gamma\_0 f = \iota\_- \tilde{\tau}\_{\mathcal{D}} f \quad \text{and} \quad \Gamma\_1 f = -\iota\_+ \tau\_{\mathcal{N}} f\_{\mathcal{D}}, \tag{8.5.2}
$$

where $f = f_\eta + f_{\mathrm D} \in \operatorname{dom} T_{\max}$ with $f_\eta \in \mathfrak N_\eta(T_{\max})$ and $f_{\mathrm D} \in \operatorname{dom} A_{\mathrm D}$; cf. (8.5.1). It is clear that $A_0 = A_{\mathrm D}$ coincides with the Friedrichs extension of $T_{\min}$ and $A_1 = T_{\min} \mathbin{\widehat+} \mathfrak N_\eta(T_{\max})$ coincides with the Kreĭn type extension $S_{K,\eta}$ of $T_{\min}$; cf. Definition 5.4.2. In order to define a boundary pair for $T_{\min}$ corresponding to $A_1 = S_{K,\eta}$, consider the densely defined closed semibounded form $\mathfrak t_{S_{K,\eta}}$ associated with $S_{K,\eta}$ and recall from Corollary 5.4.16 the direct sum decomposition

$$\operatorname{dom}\mathfrak{t}\_{\mathcal{S}\_{\mathcal{K},\eta}} = \mathfrak{N}\_{\eta}(T\_{\max}) + \operatorname{dom}\mathfrak{t}\_{A\_{\mathcal{D}}} = \mathfrak{N}\_{\eta}(T\_{\max}) + H\_0^1(\Omega) \tag{8.5.3}$$

of $\operatorname{dom}\mathfrak t_{S_{K,\eta}}$. Comparing (8.5.1) and (8.5.3), one sees that $\operatorname{dom} T_{\max} \subset \operatorname{dom}\mathfrak t_{S_{K,\eta}}$ and that the domain of the Dirichlet operator $A_{\mathrm D}$ in (8.5.1) is replaced by the corresponding form domain in (8.5.3). The functions $f \in \operatorname{dom}\mathfrak t_{S_{K,\eta}}$ will be written in the form $f = f_\eta + f_{\mathrm F}$, where $f_\eta \in \mathfrak N_\eta(T_{\max})$ and $f_{\mathrm F} \in \operatorname{dom}\mathfrak t_{A_{\mathrm D}} = H_0^1(\Omega)$. Now define the mapping

$$\Lambda : \operatorname{dom}\mathfrak t_{S_{K,\eta}} \to L^2(\partial\Omega), \quad f \mapsto \Lambda f = \iota_-\widetilde\tau_{\mathrm D} f_\eta. \tag{8.5.4}$$

It will be shown next that $\{L^2(\partial\Omega), \Lambda\}$ is a boundary pair that is compatible with the boundary triplet $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ in the sense of Definition 5.6.4; although the main part of the proof of Lemma 8.5.2 is similar to Example 5.6.9, the details are provided.

**Lemma 8.5.2.** Let $\Omega \subset \mathbb R^n$ be a bounded $C^2$-domain and let $A_{\mathrm D}$ be the self-adjoint Dirichlet realization of $-\Delta + V$ with lower bound $m(A_{\mathrm D})$. Fix $\eta < m(A_{\mathrm D})$, let $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ be the corresponding boundary triplet for $(T_{\min})^* = T_{\max}$ from Theorem 8.4.1, and let $\Lambda$ be the mapping in (8.5.4). Then $\{L^2(\partial\Omega), \Lambda\}$ is a boundary pair for $T_{\min}$ corresponding to the Kreĭn type extension $S_{K,\eta}$ which is compatible with the boundary triplet $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$. Moreover, one has

$$(T\_{\text{max}}f,g)\_{L^2(\Omega)} = \mathfrak{t}\_{\text{SK},\eta}[f,g] + (\Gamma\_1 f, \Lambda g)\_{L^2(\partial\Omega)}\tag{8.5.5}$$

for all $f \in \operatorname{dom} T_{\max}$ and $g \in \operatorname{dom}\mathfrak t_{S_{K,\eta}}$.

Proof. According to Lemma 5.6.5 (ii), it suffices to show that for some $a < \eta$ the mapping $\Lambda$ in (8.5.4) is bounded from the Hilbert space

$$\mathfrak H_{\mathfrak t_{S_{K,\eta}}-a} = \bigl(\operatorname{dom}\mathfrak t_{S_{K,\eta}}, (\cdot,\cdot)_{\mathfrak t_{S_{K,\eta}}-a}\bigr)$$

to $L^2(\partial\Omega)$ and that $\Lambda$ extends the mapping $\Gamma_0$ in (8.5.2). In the present situation it is clear that the compatibility condition $A_1 = S_{K,\eta}$ is satisfied.

In order to show that $\Lambda$ is bounded, fix some $a < \eta$ and recall first from (5.1.7) that the Hilbert space norm on $\mathfrak H_{\mathfrak t_{S_{K,\eta}}-a}$ is given by

$$\|f\|_{\mathfrak t_{S_{K,\eta}}-a}^2 = \mathfrak t_{S_{K,\eta}}[f] - a\|f\|_{L^2(\Omega)}^2, \quad f \in \operatorname{dom}\mathfrak t_{S_{K,\eta}} = \mathfrak H_{\mathfrak t_{S_{K,\eta}}-a}.$$

It follows from Theorem 8.3.9 that the restriction $\iota_-\widetilde\tau_{\mathrm D} : \mathfrak N_\eta(T_{\max}) \to L^2(\partial\Omega)$ is bounded, and hence for $f = f_\eta + f_{\mathrm F} \in \operatorname{dom}\mathfrak t_{S_{K,\eta}}$, decomposed according to (8.5.3) with $f_\eta \in \mathfrak N_\eta(T_{\max})$ and $f_{\mathrm F} \in \operatorname{dom}\mathfrak t_{A_{\mathrm D}}$, one has the estimate

$$\|\Lambda f\|^2_{L^2(\partial\Omega)} = \|\iota_-\widetilde{\tau}_{\mathrm{D}} f_\eta\|^2_{L^2(\partial\Omega)} \le C\|f_\eta\|^2_{L^2(\Omega)}.\tag{8.5.6}$$

Now the orthogonal sum decomposition

$$\operatorname{dom}\mathfrak{t}_{S_{K,\eta}} = \mathfrak{N}_a(T_{\max}) \oplus_{\mathfrak{t}_{S_{K,\eta}}-a} \operatorname{dom}\mathfrak{t}_{A_{\mathrm{D}}} \tag{8.5.7}$$

from Corollary 5.4.15 will be used. To this end, define

$$f\_a := \left(I + (a - \eta)(A\_\mathcal{D} - a)^{-1}\right) f\_\eta$$

and note that $f = f_a + h_{\mathrm{F}}$ with $f_a \in \mathfrak{N}_a(T_{\max})$ and $h_{\mathrm{F}} = f_\eta - f_a + f_{\mathrm{F}} \in \operatorname{dom}\mathfrak{t}_{A_{\mathrm{D}}}$. Then one has

$$f_\eta = \left(I + (\eta - a)(A_{\mathrm{D}} - \eta)^{-1}\right)f_a$$

and Proposition 1.4.6 leads to the estimate

$$\|f\_{\eta}\|\_{L^{2}(\Omega)} \le \frac{m(A\_{\mathcal{D}}) - a}{m(A\_{\mathcal{D}}) - \eta} \|f\_{a}\|\_{L^{2}(\Omega)}.\tag{8.5.8}$$
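To make the use of Proposition 1.4.6 transparent, the following spectral-calculus sketch (added here for the reader, not part of the original proof, and using only the standing assumption $a < \eta < m(A_{\mathrm{D}})$) verifies the estimate (8.5.8) and the fact that the two affine maps relating $f_a$ and $f_\eta$ are mutually inverse:

```latex
% For t in sigma(A_D) \subset [m(A_D),\infty) and a < \eta < m(A_D) one has
% 1 + (\eta-a)/(t-\eta) = (t-a)/(t-\eta), which is decreasing in t, so
\bigl\| I + (\eta - a)(A_{\mathrm{D}} - \eta)^{-1} \bigr\|
  \;\le\; \sup_{t \ge m(A_{\mathrm{D}})} \frac{t-a}{t-\eta}
  \;=\; \frac{m(A_{\mathrm{D}}) - a}{m(A_{\mathrm{D}}) - \eta}.
% Moreover, the resolvent identity
% (A_D - a)^{-1} - (A_D - \eta)^{-1} = (a-\eta)(A_D - \eta)^{-1}(A_D - a)^{-1}
% yields
\bigl( I + (\eta - a)(A_{\mathrm{D}} - \eta)^{-1} \bigr)
\bigl( I + (a - \eta)(A_{\mathrm{D}} - a)^{-1} \bigr) = I .
```

The first display gives (8.5.8) when applied to $f_\eta$, and the second confirms that the definitions of $f_a$ and $f_\eta$ above are consistent.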

Furthermore, it follows from (5.1.9) and the orthogonal sum decomposition (8.5.7) that

$$(\eta - a)\|f_a\|^2_{L^2(\Omega)} \le \|f_a\|^2_{\mathfrak{t}_{S_{K,\eta}}-a} \le \|f_a\|^2_{\mathfrak{t}_{S_{K,\eta}}-a} + \|h_{\mathrm{F}}\|^2_{\mathfrak{t}_{S_{K,\eta}}-a} = \|f\|^2_{\mathfrak{t}_{S_{K,\eta}}-a}.$$

From this estimate, (8.5.6), and (8.5.8) one concludes that $\Lambda : \mathfrak{H}_{\mathfrak{t}_{S_{K,\eta}}-a} \to L^2(\partial\Omega)$ is bounded.

From the definition of $\Lambda$ in (8.5.4) and the decompositions (8.5.1) and (8.5.3) it is clear that $\Lambda$ is an extension of the mapping $\Gamma_0$ in (8.5.2). Moreover, by construction, the condition $A_1 = S_{K,\eta}$ is satisfied. Therefore, Lemma 5.6.5 (ii) shows that $\{L^2(\partial\Omega),\Lambda\}$ is a boundary pair for $T_{\min}$ corresponding to $S_{K,\eta}$ which is compatible with the boundary triplet $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$. The identity (8.5.5) follows from Corollary 5.6.7. $\square$

The next theorem is a variant of Theorem 5.6.13 in the present situation.

**Theorem 8.5.3.** Let $\Omega \subset \mathbb{R}^n$ be a bounded $C^2$-domain, let $A_{\mathrm{D}}$ be the self-adjoint Dirichlet realization of $-\Delta + V$ with lower bound $m(A_{\mathrm{D}})$, and fix $\eta < m(A_{\mathrm{D}})$. Let $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ be the boundary triplet for $(T_{\min})^* = T_{\max}$ from Theorem 8.4.1 and let $\{L^2(\partial\Omega),\Lambda\}$ be the compatible boundary pair in Lemma 8.5.2. Furthermore, let $\Theta$ be a semibounded self-adjoint relation in $L^2(\partial\Omega)$ and let $A_\Theta$ be the corresponding semibounded self-adjoint extension of $T_{\min}$ in Proposition 8.5.1. Then the closed semibounded form $\omega_\Theta$ in $L^2(\partial\Omega)$ corresponding to $\Theta$ and the densely defined closed semibounded form $\mathfrak{t}_{A_\Theta}$ corresponding to $A_\Theta$ are related by

$$\begin{aligned} \mathfrak{t}_{A_\Theta}[f,g] &= \mathfrak{t}_{S_{K,\eta}}[f,g] + \omega_\Theta[\Lambda f, \Lambda g], \\ \operatorname{dom}\mathfrak{t}_{A_\Theta} &= \left\{ f \in \operatorname{dom}\mathfrak{t}_{S_{K,\eta}} : \Lambda f \in \operatorname{dom}\omega_\Theta \right\}. \end{aligned} \tag{8.5.9}$$

For completeness, the form $\mathfrak{t}_{A_\Theta}$ in Theorem 8.5.3 will be made more explicit using Corollary 5.6.14. First note that, by the definition of the boundary map $\Lambda$ in (8.5.4) and the decomposition (8.5.3), one can rewrite (8.5.9) as

$$\begin{aligned} \mathfrak{t}_{A_\Theta}[f,g] &= \mathfrak{t}_{S_{K,\eta}}[f,g] + \omega_\Theta[\iota_-\widetilde{\tau}_{\mathrm{D}} f_\eta, \iota_-\widetilde{\tau}_{\mathrm{D}} g_\eta], \\ \operatorname{dom}\mathfrak{t}_{A_\Theta} &= \left\{ f = f_\eta + f_{\mathrm{F}} \in \operatorname{dom}\mathfrak{t}_{S_{K,\eta}} : \iota_-\widetilde{\tau}_{\mathrm{D}} f_\eta \in \operatorname{dom}\omega_\Theta \right\}. \end{aligned} \tag{8.5.10}$$

If $m(\Theta)$ denotes the lower bound of the semibounded self-adjoint relation $\Theta$ and $\mu \le m(\Theta)$ is fixed, then the closed semibounded form $\mathfrak{t}_{A_\Theta}$ in (8.5.9)–(8.5.10) corresponding to $A_\Theta$ is given by

$$\begin{aligned} \mathfrak{t}_{A_\Theta}[f,g] &= \mathfrak{t}_{S_{K,\eta}}[f,g] + \left( (\Theta_{\mathrm{op}} - \mu)^{\frac{1}{2}}\iota_-\widetilde{\tau}_{\mathrm{D}} f_\eta,\ (\Theta_{\mathrm{op}} - \mu)^{\frac{1}{2}}\iota_-\widetilde{\tau}_{\mathrm{D}} g_\eta \right)_{L^2(\partial\Omega)} \\ &\quad + \mu \left( \iota_-\widetilde{\tau}_{\mathrm{D}} f_\eta,\ \iota_-\widetilde{\tau}_{\mathrm{D}} g_\eta \right)_{L^2(\partial\Omega)}, \\ \operatorname{dom}\mathfrak{t}_{A_\Theta} &= \left\{ f = f_\eta + f_{\mathrm{F}} \in \operatorname{dom}\mathfrak{t}_{S_{K,\eta}} : \iota_-\widetilde{\tau}_{\mathrm{D}} f_\eta \in \operatorname{dom}(\Theta_{\mathrm{op}} - \mu)^{\frac{1}{2}} \right\}; \end{aligned}$$

as usual, here $\Theta_{\mathrm{op}}$ denotes the semibounded self-adjoint operator part of $\Theta$ acting in $L^2(\partial\Omega)_{\mathrm{op}} = \overline{\operatorname{dom}\Theta}$. In the special case where $\Theta_{\mathrm{op}} \in \mathbf{B}(L^2(\partial\Omega)_{\mathrm{op}})$ one has

$$\begin{aligned} \mathfrak{t}_{A_\Theta}[f,g] &= \mathfrak{t}_{S_{K,\eta}}[f,g] + \left(\Theta_{\mathrm{op}}\,\iota_-\widetilde{\tau}_{\mathrm{D}} f_\eta,\ \iota_-\widetilde{\tau}_{\mathrm{D}} g_\eta\right)_{L^2(\partial\Omega)},\\ \operatorname{dom}\mathfrak{t}_{A_\Theta} &= \left\{ f = f_\eta + f_{\mathrm{F}} \in \operatorname{dom}\mathfrak{t}_{S_{K,\eta}} : \iota_-\widetilde{\tau}_{\mathrm{D}} f_\eta \in \operatorname{dom}\Theta_{\mathrm{op}} \right\}, \end{aligned}$$

and if $\Theta \in \mathbf{B}(L^2(\partial\Omega))$, then

$$\mathfrak{t}_{A_\Theta}[f,g] = \mathfrak{t}_{S_{K,\eta}}[f,g] + \left(\Theta\,\iota_-\widetilde{\tau}_{\mathrm{D}} f_\eta,\ \iota_-\widetilde{\tau}_{\mathrm{D}} g_\eta\right)_{L^2(\partial\Omega)}, \qquad \operatorname{dom}\mathfrak{t}_{A_\Theta} = \operatorname{dom}\mathfrak{t}_{S_{K,\eta}}.$$

Recall also from Corollary 5.4.15 that the form $\mathfrak{t}_{S_{K,\eta}}$ can be expressed in terms of the form $\mathfrak{t}_{A_{\mathrm{D}}}$ and the resolvent of $A_{\mathrm{D}}$.
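For orientation, consider the simplest illustrative choice $\Theta = \theta I_{L^2(\partial\Omega)}$ with $\theta \in \mathbb{R}$ (this example is not in the text): then $\Theta \in \mathbf{B}(L^2(\partial\Omega))$ and the last formula specializes to

```latex
\mathfrak{t}_{A_\Theta}[f,g]
  = \mathfrak{t}_{S_{K,\eta}}[f,g]
  + \theta \bigl( \iota_- \widetilde{\tau}_{\mathrm{D}} f_\eta,\;
                  \iota_- \widetilde{\tau}_{\mathrm{D}} g_\eta \bigr)_{L^2(\partial\Omega)},
\qquad
\operatorname{dom} \mathfrak{t}_{A_\Theta} = \operatorname{dom} \mathfrak{t}_{S_{K,\eta}} ,
```

so only the boundary term is scaled by $\theta$, while the form domain is unchanged.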

Finally, the special case $V \ge 0$ will be briefly considered. In this situation the minimal operator $T_{\min}$ and the Dirichlet operator $A_{\mathrm{D}}$ are both uniformly positive, and hence in the above construction of a boundary triplet and corresponding boundary pair one may choose $\eta = 0$. More precisely, Theorem 8.4.1 has the following form.

**Corollary 8.5.4.** Let $\Omega \subset \mathbb{R}^n$ be a bounded $C^2$-domain, let $A_{\mathrm{D}}$ be the self-adjoint Dirichlet realization of $-\Delta + V$ in $L^2(\Omega)$ with $V \ge 0$, and decompose $f \in \operatorname{dom} T_{\max}$ according to (8.4.1) with $\eta = 0$ in the form $f = f_{\mathrm{D}} + f_0$, where $f_{\mathrm{D}} \in \operatorname{dom} A_{\mathrm{D}}$ and $f_0 \in \ker T_{\max}$. Then $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$, where

$$
\Gamma\_0 f = \iota\_- \widetilde{\tau}\_{\mathcal{D}} f \quad \text{and} \quad \Gamma\_1 f = -\iota\_+ \tau\_{\mathcal{N}} f\_{\mathcal{D}}, \qquad f = f\_{\mathcal{D}} + f\_0 \in \text{dom}\, T\_{\text{max}},
$$

is a boundary triplet for $(T_{\min})^* = T_{\max}$ such that

$$A_0 = A_{\mathrm{D}} \qquad \text{and} \qquad A_1 = T_{\min}\,\widehat{+}\,\widehat{\mathfrak{N}}_0(T_{\max})$$

coincide with the Friedrichs extension and the Kreĭn–von Neumann extension of $T_{\min}$, respectively.

It is clear from Proposition 8.4.4 that for all $\lambda \in \rho(A_{\mathrm{D}})$ the $\gamma$-field and Weyl function corresponding to the boundary triplet $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ in Corollary 8.5.4 have the form

$$
\gamma(\lambda)\varphi = \left(I + \lambda(A\_D - \lambda)^{-1}\right)f\_0(\varphi), \quad \varphi \in L^2(\partial\Omega),
$$

and

$$M(\lambda)\varphi = -\iota_+\tau_{\mathrm{N}}\,\lambda(A_{\mathrm{D}}-\lambda)^{-1}f_0(\varphi), \quad \varphi \in L^2(\partial\Omega),\tag{8.5.11}$$

respectively, where $f_0(\varphi)$ is the unique element in $\mathfrak{N}_0(T_{\max})$ with the property that $\Gamma_0 f_0(\varphi) = \iota_-\widetilde{\tau}_{\mathrm{D}} f_0(\varphi) = \varphi$.
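A short verification (added here for the reader; it is not spelled out in the text) that $g := (I + \lambda(A_{\mathrm{D}}-\lambda)^{-1})f_0(\varphi)$ indeed lies in $\mathfrak{N}_\lambda(T_{\max})$ with $\Gamma_0 g = \varphi$:

```latex
% Since f_0(\varphi) \in \ker T_max and \lambda(A_D-\lambda)^{-1} f_0(\varphi) \in dom A_D:
(T_{\max} - \lambda)g
  = T_{\max} f_0(\varphi) - \lambda f_0(\varphi)
    + (A_{\mathrm{D}} - \lambda)\,\lambda (A_{\mathrm{D}} - \lambda)^{-1} f_0(\varphi)
  = -\lambda f_0(\varphi) + \lambda f_0(\varphi) = 0 ,
% and, as the dom A_D - part of g has zero Dirichlet trace,
\Gamma_0 g = \iota_- \widetilde{\tau}_{\mathrm{D}}\, f_0(\varphi) = \varphi .
```

This confirms that $\gamma(\lambda)$ maps $\varphi$ to the unique $\lambda$-eigenelement of $T_{\max}$ with Dirichlet boundary datum $\varphi$.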

The next proposition is a variant of Proposition 8.5.1 for nonnegative extensions.

**Proposition 8.5.5.** Let $\Omega \subset \mathbb{R}^n$ be a bounded $C^2$-domain, assume that $V \ge 0$, and let $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ be the boundary triplet for $(T_{\min})^* = T_{\max}$ from Corollary 8.5.4. Let

$$\begin{aligned} A\_{\Theta} &= -\Delta + V, \\ \text{dom}\, A\_{\Theta} &= \left\{ f \in \text{dom}\, T\_{\text{max}} \, : \left\{ \Gamma\_0 f, \Gamma\_1 f \right\} \in \Theta \right\}, \end{aligned}$$

be a self-adjoint extension of $T_{\min}$ in $L^2(\Omega)$ corresponding to a self-adjoint relation $\Theta$ in $L^2(\partial\Omega)$ as in (8.4.13). Then

$$A_\Theta \text{ is nonnegative} \quad \Leftrightarrow \quad \Theta \text{ is nonnegative}.$$

Proof. Note that the Weyl function $M$ in (8.5.11) satisfies $M(0) = 0$ and that $T_{\min}$ is uniformly positive. Therefore, if $A_\Theta$ is a nonnegative self-adjoint extension of $T_{\min}$, then Proposition 5.5.6 with $x = 0$ shows that the self-adjoint relation $\Theta$ in $L^2(\partial\Omega)$ is nonnegative. Conversely, if $\Theta$ is a nonnegative self-adjoint relation in $L^2(\partial\Omega)$, then it follows from Corollary 5.5.15 and $A_1 = S_{K,0} \ge 0$ that $A_\Theta$ is a nonnegative self-adjoint extension of $T_{\min}$. $\square$

In the nonnegative case the boundary mapping $\Lambda$ in (8.5.4) is given by

$$
\Lambda : \operatorname{dom}\mathfrak{t}_{S_{K,0}} \to L^2(\partial\Omega), \quad f \mapsto \Lambda f = \iota_-\widetilde{\tau}_{\mathrm{D}} f_0,\tag{8.5.12}
$$

where one has the direct sum decomposition

$$
\operatorname{dom}\mathfrak{t}_{S_{K,0}} = \mathfrak{N}_0(T_{\max}) \dotplus \operatorname{dom}\mathfrak{t}_{A_{\mathrm{D}}} = \mathfrak{N}_0(T_{\max}) \dotplus H^1_0(\Omega),
$$

and according to Lemma 8.5.2, $\{L^2(\partial\Omega),\Lambda\}$ is a boundary pair that is compatible with the boundary triplet $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ in Corollary 8.5.4.

In the nonnegative case a description of the nonnegative extensions and their form domains is of special interest. In the present situation Corollary 5.6.18 reads as follows.

**Corollary 8.5.6.** Let $\Omega \subset \mathbb{R}^n$ be a bounded $C^2$-domain, let $A_{\mathrm{D}}$ be the self-adjoint Dirichlet realization of $-\Delta + V$ with $V \ge 0$, let $\{L^2(\partial\Omega), \Gamma_0, \Gamma_1\}$ be the boundary triplet for $(T_{\min})^* = T_{\max}$ from Corollary 8.5.4, and let $\{L^2(\partial\Omega),\Lambda\}$ be the compatible boundary pair in (8.5.12). Then the formula

$$\begin{aligned} \mathfrak{t}_{A_\Theta}[f,g] &= \mathfrak{t}_{S_{K,0}}[f,g] + \left(\Theta_{\mathrm{op}}^{\frac{1}{2}}\iota_-\widetilde{\tau}_{\mathrm{D}} f_0,\ \Theta_{\mathrm{op}}^{\frac{1}{2}}\iota_-\widetilde{\tau}_{\mathrm{D}} g_0\right)_{L^2(\partial\Omega)}, \\ \operatorname{dom}\mathfrak{t}_{A_\Theta} &= \left\{ f = f_0 + f_{\mathrm{F}} \in \operatorname{dom}\mathfrak{t}_{S_{K,0}} : \iota_-\widetilde{\tau}_{\mathrm{D}} f_0 \in \operatorname{dom}\Theta_{\mathrm{op}}^{\frac{1}{2}} \right\}, \end{aligned}$$

establishes a one-to-one correspondence between all closed nonnegative forms $\mathfrak{t}_{A_\Theta}$ corresponding to nonnegative self-adjoint extensions $A_\Theta$ of $T_{\min}$ in $L^2(\Omega)$ and all closed nonnegative forms $\omega_\Theta$ corresponding to nonnegative self-adjoint relations $\Theta$ in $L^2(\partial\Omega)$.

## **8.6 Coupling of Schrödinger operators**

The aim of this section is to interpret the natural self-adjoint Schrödinger operator

$$A = -\Delta + V, \qquad \text{dom}\, A = H^2(\mathbb{R}^n), \tag{8.6.1}$$

in $L^2(\mathbb{R}^n)$ with a real potential $V \in L^\infty(\mathbb{R}^n)$ as a coupling of Schrödinger operators on a bounded $C^2$-domain and its complement; that is, $A$ is identified as a self-adjoint extension of the orthogonal sum of the minimal Schrödinger operators on the subdomains, and its resolvent is expressed in a Kreĭn type resolvent formula. The present treatment is a multidimensional variant of the discussion in Section 6.5 and is based on the abstract coupling construction in Section 4.6.

Let $\Omega_+ \subset \mathbb{R}^n$ be a bounded $C^2$-domain and let $\Omega_- := \mathbb{R}^n \setminus \overline{\Omega_+}$ be the corresponding exterior domain. Since $\mathcal{C} := \partial\Omega_- = \partial\Omega_+$ is $C^2$-smooth in the sense of Definition 8.2.1, the term $C^2$-domain will be used here for $\Omega_-$, although $\Omega_-$ is unbounded. In the following the common boundary $\mathcal{C}$ is sometimes referred to as an interface linking the two domains $\Omega_+$ and $\Omega_-$. Note that one has the identification

$$L^2(\mathbb{R}^n) = L^2(\Omega\_+) \oplus L^2(\Omega\_-). \tag{8.6.2}$$

Consider the Schrödinger operator $A = -\Delta + V$, $\operatorname{dom} A = H^2(\mathbb{R}^n)$, in (8.6.1) with $V \in L^\infty(\mathbb{R}^n)$ real. Since the Laplacian $-\Delta$ defined on $H^2(\mathbb{R}^n)$ is unitarily equivalent in $L^2(\mathbb{R}^n)$ via the Fourier transform to the maximal multiplication operator with the function $x \mapsto |x|^2$, it is clear that $-\Delta$, and hence $A$ in (8.6.1), is self-adjoint in $L^2(\mathbb{R}^n)$. Moreover, for $f \in C_0^\infty(\mathbb{R}^n)$ integration by parts shows that

$$(Af, f)\_{L^2(\mathbb{R}^n)} = (\nabla f, \nabla f)\_{L^2(\mathbb{R}^n; \mathbb{C}^n)} + (Vf, f)\_{L^2(\mathbb{R}^n)} \ge \upsilon\_- \|f\|\_{L^2(\mathbb{R}^n)}^2,$$

where $v_- = \operatorname{ess\,inf} V$. As $C_0^\infty(\mathbb{R}^n)$ is dense in $H^2(\mathbb{R}^n)$, this estimate extends to $H^2(\mathbb{R}^n)$. Therefore, $A$ is semibounded from below and $v_-$ is a lower bound.

The restriction of the real function $V \in L^\infty(\mathbb{R}^n)$ to $\Omega_\pm$ is denoted by $V_\pm$, and the same $\pm$-index notation will be used for the restrictions $f_\pm \in L^2(\Omega_\pm)$ of an element $f \in L^2(\mathbb{R}^n)$. The minimal and maximal operators associated with $-\Delta + V_+$ in $L^2(\Omega_+)$ will be denoted by $T^+_{\min}$ and $T^+_{\max}$, respectively, and the self-adjoint Dirichlet realization in $L^2(\Omega_+)$ will be denoted by $A^+_{\mathrm{D}}$; cf. Proposition 8.3.1, Proposition 8.3.2, and Theorem 8.3.4. For the minimal operator

$$T\_{\min}^{-} = -\Delta + V\_{-}, \qquad \text{dom}\, T\_{\min}^{-} = H\_0^2(\Omega\_{-}),$$

and the maximal operator

$$\begin{aligned} T\_{\text{max}}^- &= -\Delta + V\_-, \\ \text{dom}\, T\_{\text{max}}^- &= \left\{ f\_- \in L^2(\Omega\_-) : -\Delta f\_- + V\_- f\_- \in L^2(\Omega\_-) \right\}, \end{aligned}$$

on the unbounded $C^2$-domain one can show in the same way as in the proof of Proposition 8.3.1 that $(T^-_{\min})^* = T^-_{\max}$ and $T^-_{\min} = (T^-_{\max})^*$. Furthermore, since $\Omega_-$ has a compact $C^2$-smooth boundary, it follows by analogy to Theorem 8.3.4 that the self-adjoint Dirichlet realization $A^-_{\mathrm{D}}$ corresponding to the densely defined closed semibounded form

$$\mathbf{t}\_{\mathrm{D}}^{-}[f\_{-},g\_{-}] = (\nabla f\_{-},\nabla g\_{-})\_{L^{2}(\Omega\_{-};\mathbb{C}^{n})} + (V\_{-}f\_{-},g\_{-})\_{L^{2}(\Omega\_{-})}, \quad \mathrm{dom}\,\mathbf{t}\_{\mathrm{D}}^{-} = H\_{0}^{1}(\Omega\_{-}),$$

via the first representation theorem (Theorem 5.1.18) is given by

$$A\_\mathcal{D}^- = -\Delta + V\_-, \quad \text{dom}\, A\_\mathcal{D}^- = \left\{ f\_- \in H^2(\Omega\_-) : \tau\_\mathcal{D}^- f\_- = 0 \right\},$$

where $\tau^-_{\mathrm{D}}$ denotes the Dirichlet trace operator on $\Omega_-$; cf. (8.2.13). The operator $A^-_{\mathrm{D}}$ is semibounded from below and $v_- = \operatorname{ess\,inf} V$ is a lower bound. In contrast to the Dirichlet operator $A^+_{\mathrm{D}}$, the resolvent of $A^-_{\mathrm{D}}$ is not compact, since Rellich's theorem is not valid on the unbounded domain $\Omega_-$; cf. the proof of Proposition 8.3.2. Note also that the Dirichlet trace operator $\tau^-_{\mathrm{D}} : H^2(\Omega_-) \to H^{3/2}(\mathcal{C})$ and the Neumann trace operator $\tau^-_{\mathrm{N}} : H^2(\Omega_-) \to H^{1/2}(\mathcal{C})$ have the same mapping properties as on a bounded domain. Moreover, both trace operators admit continuous extensions to $\operatorname{dom} T^-_{\max}$ as in Theorem 8.3.9 and Theorem 8.3.10. With the identification (8.6.2) it is clear that the orthogonal sum

$$
\tilde{A}\_{\rm D} = \begin{pmatrix} A\_{\rm D}^{+} & 0 \\ 0 & A\_{\rm D}^{-} \end{pmatrix} \tag{8.6.3}
$$

is a self-adjoint operator in $L^2(\mathbb{R}^n)$ with Dirichlet boundary conditions on $\mathcal{C}$. The goal of the following considerations is to identify the self-adjoint Schrödinger operator $A$ in (8.6.1) as a self-adjoint extension of the orthogonal sum of the minimal operators $T^\pm_{\min}$ and to compare $A$ with the orthogonal sum $\widetilde{A}_{\mathrm{D}}$ in (8.6.3) using a Kreĭn type resolvent formula.

From now on it is assumed that $\eta < \operatorname{ess\,inf} V$ is fixed, so that, in particular, $\eta \in \rho(A^+_{\mathrm{D}}) \cap \rho(A^-_{\mathrm{D}}) \cap \mathbb{R}$. Consider the boundary triplet $\{L^2(\mathcal{C}), \Gamma^+_0, \Gamma^+_1\}$ for $T^+_{\max}$ in Theorem 8.4.1, that is,

$$
\Gamma\_0^+ f\_+ = \iota\_- \widetilde{\tau}\_{\mathcal{D}}^+ f\_+ \quad \text{and} \quad \Gamma\_1^+ f\_+ = -\iota\_+ \tau\_{\mathcal{N}}^+ f\_{\mathcal{D},+}, \quad f\_+ \in \text{dom}\, T^+\_{\text{max}},
$$

where $f_+ = f_{\mathrm{D},+} + f_{\eta,+}$ with $f_{\mathrm{D},+} \in \operatorname{dom} A^+_{\mathrm{D}}$ and $f_{\eta,+} \in \mathfrak{N}_\eta(T^+_{\max})$. In the same way as in the proof of Theorem 8.4.1 one verifies that $\{L^2(\mathcal{C}), \Gamma^-_0, \Gamma^-_1\}$, where

$$
\Gamma\_0^- f\_- = \iota\_- \widetilde{\tau}\_\mathcal{D}^- f\_- \quad \text{and} \quad \Gamma\_1^- f\_- = -\iota\_+ \tau\_\mathcal{N}^- f\_{\mathcal{D},-}, \quad f\_- \in \text{dom}\, T^-\_{\text{max}},
$$

where $f_- = f_{\mathrm{D},-} + f_{\eta,-}$ with $f_{\mathrm{D},-} \in \operatorname{dom} A^-_{\mathrm{D}}$ and $f_{\eta,-} \in \mathfrak{N}_\eta(T^-_{\max})$, is a boundary triplet for $T^-_{\max}$ such that $\operatorname{dom} A^-_{\mathrm{D}} = \ker\Gamma^-_0$. The $\gamma$-fields and Weyl functions in Proposition 8.4.4 corresponding to the boundary triplets $\{L^2(\mathcal{C}), \Gamma^\pm_0, \Gamma^\pm_1\}$ are denoted by $\gamma_\pm$ and $M_\pm$, respectively.

In analogy to Section 4.6, the orthogonal coupling of the boundary triplets $\{L^2(\mathcal{C}), \Gamma^+_0, \Gamma^+_1\}$ and $\{L^2(\mathcal{C}), \Gamma^-_0, \Gamma^-_1\}$ leads to the boundary triplet

$$\left\{ L^2(\mathcal{C}) \oplus L^2(\mathcal{C}),\ \widetilde{\Gamma}_0,\ \widetilde{\Gamma}_1 \right\} \tag{8.6.4}$$

for the orthogonal sum $T_{\max} := T^+_{\max} \oplus T^-_{\max}$ of the maximal operators $T^\pm_{\max}$, where

$$\tilde{\Gamma}\_0 f = \begin{pmatrix} \Gamma\_0^+ f\_+ \\ \Gamma\_0^- f\_- \end{pmatrix} = \begin{pmatrix} \iota\_- \tilde{\tau}\_{\mathcal{D}}^+ f\_+ \\ \iota\_- \tilde{\tau}\_{\mathcal{D}}^- f\_- \end{pmatrix}, \quad f = \begin{pmatrix} f\_+ \\ f\_- \end{pmatrix}, \quad f\_\pm \in \text{dom}\, T\_{\text{max}}^\pm,\tag{8.6.5}$$

and

$$\widetilde{\Gamma}\_1 f = \begin{pmatrix} \Gamma\_1^+ f\_+ \\ \Gamma\_1^- f\_- \end{pmatrix} = \begin{pmatrix} -\iota\_+ \tau\_\mathcal{N}^+ f\_{\mathcal{D},+} \\ -\iota\_+ \tau\_\mathcal{N}^- f\_{\mathcal{D},-} \end{pmatrix}, \quad f = \begin{pmatrix} f\_+ \\ f\_- \end{pmatrix}, \quad f\_\pm \in \text{dom}\, T\_{\text{max}}^\pm. \tag{8.6.6}$$

It is clear that

$$
\text{dom}\,A\_\text{D}^+ \times \text{dom}\,A\_\text{D}^- = \ker \widetilde{\Gamma}\_0,
$$

and hence the self-adjoint operator in (8.6.3) coincides with the self-adjoint extension of $T_{\min} := T^+_{\min} \oplus T^-_{\min}$ corresponding to the boundary condition $\ker\widetilde{\Gamma}_0$. Note also that the corresponding $\gamma$-field $\widetilde{\gamma}$ and Weyl function $\widetilde{M}$ have the form

$$
\widetilde{\gamma}(\lambda) = \begin{pmatrix} \gamma\_+(\lambda) & 0 \\ 0 & \gamma\_-(\lambda) \end{pmatrix} \quad \text{and} \quad \widetilde{M}(\lambda) = \begin{pmatrix} M\_+(\lambda) & 0 \\ 0 & M\_-(\lambda) \end{pmatrix} \tag{8.6.7}
$$

for $\lambda \in \rho(A^+_{\mathrm{D}}) \cap \rho(A^-_{\mathrm{D}})$.

In Lemma 8.6.2 it will be shown that a certain relation $\widetilde{\Theta}$ is self-adjoint in $L^2(\mathcal{C}) \oplus L^2(\mathcal{C})$. This relation will turn out to be the boundary parameter that corresponds to the Schrödinger operator $A$ in (8.6.1) via the boundary triplet (8.6.4). The following lemma on the sum of the Dirichlet-to-Neumann maps is preparatory.

**Lemma 8.6.1.** Let $\eta < \operatorname{ess\,inf} V$ and let $D_\pm(\lambda) : H^{3/2}(\mathcal{C}) \to H^{1/2}(\mathcal{C})$ be the Dirichlet-to-Neumann maps as in Definition 8.3.6 corresponding to $-\Delta + V_\pm$. Then for all $\lambda \in \mathbb{C} \setminus [\eta,\infty)$ the operator

$$D_+(\lambda) + D_-(\lambda) : H^{3/2}(\mathcal{C}) \to H^{1/2}(\mathcal{C}) \tag{8.6.8}$$

is bijective.

Proof. First it will be shown that the operator in (8.6.8) is injective. Assume that $(D_+(\lambda)+D_-(\lambda))\varphi = 0$ for some $\varphi \in H^{3/2}(\mathcal{C})$ and some $\lambda \in \mathbb{C}\setminus[\eta,\infty)$. Then there exist $f_{\lambda,\pm} \in H^2(\Omega_\pm)$ such that

$$(-\Delta + V\_{\pm})f\_{\lambda,\pm} = \lambda f\_{\lambda,\pm}, \qquad \tau\_{\rm D}^{+}f\_{\lambda,+} = \tau\_{\rm D}^{-}f\_{\lambda,-} = \varphi,\tag{8.6.9}$$

and

$$0 = \left(D_+(\lambda) + D_-(\lambda)\right)\varphi = \left(D_+(\lambda) + D_-(\lambda)\right)\tau^\pm_{\mathrm{D}} f_{\lambda,\pm} = \tau^+_{\mathrm{N}} f_{\lambda,+} + \tau^-_{\mathrm{N}} f_{\lambda,-}.$$

As $\tau^+_{\mathrm{D}} f_{\lambda,+} = \tau^-_{\mathrm{D}} f_{\lambda,-}$ and $\tau^+_{\mathrm{N}} f_{\lambda,+} = -\tau^-_{\mathrm{N}} f_{\lambda,-}$, this implies that

$$f\_{\lambda} = \begin{pmatrix} f\_{\lambda,+} \\ f\_{\lambda,-} \end{pmatrix} \in H^2(\mathbb{R}^n). \tag{8.6.10}$$

In fact, for each $h = (h_+, h_-)^\top \in \operatorname{dom} A = H^2(\mathbb{R}^n)$ one also has $\tau^+_{\mathrm{D}} h_+ = \tau^-_{\mathrm{D}} h_-$ and $\tau^+_{\mathrm{N}} h_+ = -\tau^-_{\mathrm{N}} h_-$ (note that the different signs are due to the fact that the Neumann trace on each domain is taken with respect to the outward normal vector), and hence

$$\begin{split} (Ah,f\_{\lambda})\_{L^{2}(\mathbb{R}^{n})} - (h,T\_{\max}f\_{\lambda})\_{L^{2}(\mathbb{R}^{n})} \\ &= (T\_{\max}^{+}h\_{+},f\_{\lambda,+})\_{L^{2}(\Omega\_{+})} - (h\_{+},T\_{\max}^{+}f\_{\lambda,+})\_{L^{2}(\Omega\_{+})} \\ &\quad + (T\_{\max}^{-}h\_{-},f\_{\lambda,-})\_{L^{2}(\Omega\_{-})} - (h\_{-},T\_{\max}^{-}f\_{\lambda,-})\_{L^{2}(\Omega\_{-})} \\ &= (\tau\_{\mathcal{D}}^{+}h\_{+},\tau\_{\mathcal{N}}^{+}f\_{\lambda,+})\_{L^{2}(\mathcal{C})} - (\tau\_{\mathcal{N}}^{+}h\_{+},\tau\_{\mathcal{D}}^{+}f\_{\lambda,+})\_{L^{2}(\mathcal{C})} \\ &\quad + (\tau\_{\mathcal{D}}^{-}h\_{-},\tau\_{\mathcal{N}}^{-}f\_{\lambda,-})\_{L^{2}(\mathcal{C})} - (\tau\_{\mathcal{N}}^{-}h\_{-},\tau\_{\mathcal{D}}^{-}f\_{\lambda,-})\_{L^{2}(\mathcal{C})} \\ &= 0. \end{split}$$

As the operator $A$ is self-adjoint, this shows, in particular, that $f_\lambda \in \operatorname{dom} A$, and hence (8.6.10) holds. Furthermore, from (8.6.9) it follows that

$$Af\_\lambda = (-\Delta + V)f\_\lambda = \lambda f\_\lambda.$$

Since $\sigma(A) \subset [v_-,\infty) \subset [\eta,\infty)$ and $\lambda \in \mathbb{C} \setminus [\eta,\infty)$, this implies that $f_\lambda = 0$ and hence $\varphi = \tau^\pm_{\mathrm{D}} f_{\lambda,\pm} = 0$. Thus, it has been shown that the operator in (8.6.8) is injective for all $\lambda \in \mathbb{C} \setminus [\eta,\infty)$.

Next it will be shown that the operator in (8.6.8) is surjective. For this consider the space

$$H_{\mathcal{C}}(\mathbb{R}^n) := \left\{ f = \begin{pmatrix} f_+ \\ f_- \end{pmatrix} : f_\pm \in H^2(\Omega_\pm),\ \tau^+_{\mathrm{D}} f_+ = \tau^-_{\mathrm{D}} f_- \right\}$$

and observe that as a consequence of (8.2.12) the mapping

$$\tau^{\mathcal{C}}_{\mathrm{N}} : H_{\mathcal{C}}(\mathbb{R}^n) \to H^{1/2}(\mathcal{C}), \qquad f \mapsto \tau^{\mathcal{C}}_{\mathrm{N}} f := \tau^+_{\mathrm{N}} f_+ + \tau^-_{\mathrm{N}} f_-,$$

is surjective. For $\lambda \in \mathbb{C} \setminus [\eta,\infty)$ it will now be shown that the direct sum decomposition

$$H\_{\mathcal{C}}(\mathbb{R}^n) = \text{dom}\,A + \left\{ f\_{\lambda} = \begin{pmatrix} f\_{\lambda,+} \\ f\_{\lambda,-} \end{pmatrix} : \begin{array}{l} f\_{\lambda, \pm} \in H^2(\Omega\_{\pm}), \ \tau\_{\mathcal{D}}^+ f\_{\lambda,+} = \tau\_{\mathcal{D}}^- f\_{\lambda,-}, \\ (-\Delta + V\_{\pm}) f\_{\lambda, \pm} = \lambda f\_{\lambda, \pm} \end{array} \right\}$$

holds. In fact, the inclusion ($\supset$) is clear since $\operatorname{dom} A = H^2(\mathbb{R}^n)$ and the second summand on the right-hand side is obviously contained in $H_{\mathcal{C}}(\mathbb{R}^n)$. The inclusion ($\subset$) follows from Theorem 1.7.1 applied to $T = -\Delta + V$, $\operatorname{dom} T = H_{\mathcal{C}}(\mathbb{R}^n)$, after observing that the space

$$\left\{ f\_{\lambda} = \begin{pmatrix} f\_{\lambda,+} \\ f\_{\lambda,-} \end{pmatrix} : \begin{matrix} f\_{\lambda,\pm} \in H^2(\Omega\_{\pm}), \ \tau\_{\text{D}}^+ f\_{\lambda,+} = \tau\_{\text{D}}^- f\_{\lambda,-}, \\ (-\Delta + V\_{\pm}) f\_{\lambda,\pm} = \lambda f\_{\lambda,\pm} \end{matrix} \right\} \tag{8.6.11}$$

coincides with $\mathfrak{N}_\lambda(T) = \ker(T - \lambda)$ and $\lambda \in \rho(A)$. Note also that $\lambda \in \rho(A)$ implies that the sum is direct.

Next observe that for $f \in \operatorname{dom} A$ one has $\tau^{\mathcal{C}}_{\mathrm{N}} f = 0$, and hence also the restriction of $\tau^{\mathcal{C}}_{\mathrm{N}}$ to the space (8.6.11) maps onto $H^{1/2}(\mathcal{C})$. Therefore, for $\psi \in H^{1/2}(\mathcal{C})$ there exists $f_\lambda = (f_{\lambda,+}, f_{\lambda,-})^\top$ such that $f_{\lambda,\pm} \in H^2(\Omega_\pm)$, $(-\Delta + V_\pm) f_{\lambda,\pm} = \lambda f_{\lambda,\pm}$,

$$
\tau^+_{\mathrm{D}} f_{\lambda,+} = \tau^-_{\mathrm{D}} f_{\lambda,-} =: \varphi \in H^{3/2}(\mathcal{C}) \quad\text{and}\quad \tau^{\mathcal{C}}_{\mathrm{N}} f_\lambda = \tau^+_{\mathrm{N}} f_{\lambda,+} + \tau^-_{\mathrm{N}} f_{\lambda,-} = \psi.
$$

It follows that

$$\left(D\_{+}(\lambda) + D\_{-}(\lambda)\right)\varphi = D\_{+}(\lambda)\tau\_{\mathcal{D}}^{+}f\_{\lambda,+} + D\_{-}(\lambda)\tau\_{\mathcal{D}}^{-}f\_{\lambda,-} = \tau\_{\mathcal{N}}^{+}f\_{\lambda,+} + \tau\_{\mathcal{N}}^{-}f\_{\lambda,-} = \psi,$$

and hence the operator in (8.6.8) is surjective.

Consequently, it has been shown that (8.6.8) is a bijective operator for all $\lambda \in \mathbb{C} \setminus [\eta,\infty)$. $\square$
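As an illustration of Lemma 8.6.1 (a heuristic one-dimensional analogue in the spirit of Section 6.5, not part of the argument), take $V = 0$ with interface $\mathcal{C} = \{0\}$ and half-lines $\Omega_+ = (-\infty,0)$, $\Omega_- = (0,\infty)$; the Dirichlet-to-Neumann maps then become multiplication by scalars:

```latex
% L^2-solutions of -f'' = \lambda f on the two half-lines
% (branch of the square root with Re \sqrt{-\lambda} > 0):
f_{\lambda,+}(x) = e^{\sqrt{-\lambda}\,x}, \qquad
f_{\lambda,-}(x) = e^{-\sqrt{-\lambda}\,x},
% Neumann traces with respect to the outward normals give
D_+(\lambda) = D_-(\lambda) = \sqrt{-\lambda}, \qquad
D_+(\lambda) + D_-(\lambda) = 2\sqrt{-\lambda} \neq 0
\quad \text{for } \lambda \in \mathbb{C} \setminus [0,\infty),
```

so the sum of the Dirichlet-to-Neumann maps is invertible off $[0,\infty)$, the spectrum of $-\mathrm{d}^2/\mathrm{d}x^2$ on $\mathbb{R}$, in line with the bijectivity statement of the lemma.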


Lemma 8.6.1 will be used to prove the following lemma on the self-adjointness of a particular relation $\widetilde{\Theta}$ in $L^2(\mathcal{C}) \oplus L^2(\mathcal{C})$.

**Lemma 8.6.2.** Let $\eta < \operatorname{ess\,inf} V$ and let $D_\pm(\eta) : H^{3/2}(\mathcal{C}) \to H^{1/2}(\mathcal{C})$ be the Dirichlet-to-Neumann maps as in Definition 8.3.6 corresponding to $-\Delta + V_\pm$. Then the relation

$$\widetilde{\Theta} = \left\{ \left\{ \begin{pmatrix} \xi \\ \xi \end{pmatrix}, \begin{pmatrix} \varphi \\ \psi \end{pmatrix} \right\} : \xi \in H^2(\mathcal{C}),\ \varphi + \psi = \iota_+\left(D_+(\eta) + D_-(\eta)\right)\iota_-^{-1}\xi \right\}$$

is self-adjoint in $L^2(\mathcal{C}) \oplus L^2(\mathcal{C})$.

Proof. Recall first from Example 8.4.7 that $\iota_+ D_+(\eta)\iota_-^{-1}$ and $\iota_+ D_-(\eta)\iota_-^{-1}$ are both unbounded bijective self-adjoint operators in $L^2(\mathcal{C})$ with domain $H^2(\mathcal{C})$. Since $\eta < \operatorname{ess\,inf} V$, one also sees from (8.4.18) that these operators are nonnegative. It follows, in particular, that $\iota_+(D_+(\eta) + D_-(\eta))\iota_-^{-1}$ is a symmetric operator in $L^2(\mathcal{C})$. Since

$$D_+(\eta) + D_-(\eta) : H^{3/2}(\mathcal{C}) \to H^{1/2}(\mathcal{C})$$

is bijective by Lemma 8.6.1 and the restricted operators $\iota_-^{-1} : H^2(\mathcal{C}) \to H^{3/2}(\mathcal{C})$ and $\iota_+ : H^{1/2}(\mathcal{C}) \to L^2(\mathcal{C})$ are also bijective, one concludes that

$$
\iota_+\left(D_+(\eta) + D_-(\eta)\right)\iota_-^{-1} \tag{8.6.12}
$$

is a uniformly positive self-adjoint operator in $L^2(\mathcal{C})$ defined on $H^2(\mathcal{C})$.

To show that $\widetilde{\Theta} \subset \widetilde{\Theta}^*$, consider two arbitrary elements

$$\left\{ \begin{pmatrix} \xi \\ \xi \end{pmatrix}, \begin{pmatrix} \varphi \\ \psi \end{pmatrix} \right\}, \left\{ \begin{pmatrix} \xi' \\ \xi' \end{pmatrix}, \begin{pmatrix} \varphi' \\ \psi' \end{pmatrix} \right\} \in \tilde{\Theta},$$

that is, $\xi, \xi' \in H^2(\mathcal{C})$,

$$
\varphi + \psi = \iota\_+ \left( D\_+ (\eta) + D\_- (\eta) \right) \iota\_-^{-1} \xi \quad \text{and} \quad \varphi' + \psi' = \iota\_+ \left( D\_+ (\eta) + D\_- (\eta) \right) \iota\_-^{-1} \xi' .
$$

Then one computes

$$\begin{split} &\left( \begin{pmatrix} \xi \\ \xi \end{pmatrix}, \begin{pmatrix} \varphi' \\ \psi' \end{pmatrix} \right)_{(L^2(\mathcal{C}))^2} - \left( \begin{pmatrix} \varphi \\ \psi \end{pmatrix}, \begin{pmatrix} \xi' \\ \xi' \end{pmatrix} \right)_{(L^2(\mathcal{C}))^2} \\ &= \left( \xi, \varphi' + \psi' \right)_{L^2(\mathcal{C})} - \left( \varphi + \psi, \xi' \right)_{L^2(\mathcal{C})} \\ &= \left( \xi, \iota_+\left(D_+(\eta)+D_-(\eta)\right)\iota_-^{-1}\xi' \right)_{L^2(\mathcal{C})} - \left( \iota_+\left(D_+(\eta)+D_-(\eta)\right)\iota_-^{-1}\xi, \xi' \right)_{L^2(\mathcal{C})} \\ &= 0, \end{split}$$

where in the last step it was used that (8.6.12) is a symmetric operator in $L^2(\mathcal{C})$. Hence, the relation $\widetilde{\Theta}$ is symmetric in $L^2(\mathcal{C}) \oplus L^2(\mathcal{C})$. For the opposite inclusion $\widetilde{\Theta}^* \subset \widetilde{\Theta}$ consider an element

$$\left\{ \begin{pmatrix} \alpha\\ \beta \end{pmatrix}, \begin{pmatrix} \gamma\\ \delta \end{pmatrix} \right\} \in \tilde{\Theta}^\*,\tag{8.6.13}$$

that is,

$$\left( \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \begin{pmatrix} \varphi \\ \psi \end{pmatrix} \right)\_{(L^2(\mathcal{C}))^2} = \left( \begin{pmatrix} \gamma \\ \delta \end{pmatrix}, \begin{pmatrix} \xi \\ \xi \end{pmatrix} \right)\_{(L^2(\mathcal{C}))^2} \tag{8.6.14}$$

holds for all

$$\left\{ \begin{pmatrix} \xi \\ \xi \end{pmatrix}, \begin{pmatrix} \varphi \\ \psi \end{pmatrix} \right\} \in \tilde{\Theta}.$$

The special choice $\xi = 0$ yields $\varphi + \psi = 0$ by the definition of $\widetilde{\Theta}$, and hence $(\alpha - \beta, \varphi)\_{L^2(\mathcal{C})} = 0$ for all $\varphi \in L^2(\mathcal{C})$. This shows $\alpha = \beta$, and therefore (8.6.14) becomes

$$\left(\alpha, \iota\_+ \left(D\_+(\eta) + D\_-(\eta)\right)\iota\_-^{-1}\xi\right)\_{L^2(\mathcal{C})} = (\alpha, \varphi + \psi)\_{L^2(\mathcal{C})} = (\gamma + \delta, \xi)\_{L^2(\mathcal{C})}$$

for all $\xi \in H^2(\mathcal{C})$. Since $\iota\_+ \left( D\_+(\eta) + D\_-(\eta) \right) \iota\_-^{-1}$ is a self-adjoint operator in $L^2(\mathcal{C})$ defined on $H^2(\mathcal{C})$, it follows that $\alpha \in H^2(\mathcal{C})$ and

$$\iota\_+ \left( D\_+ (\eta) + D\_- (\eta) \right) \iota\_-^{-1} \alpha = \gamma + \delta.$$

This implies that the element in (8.6.13) belongs to $\widetilde{\Theta}$. Thus, $\widetilde{\Theta}$ is a self-adjoint relation in $L^2(\mathcal{C}) \oplus L^2(\mathcal{C})$. $\square$

The following theorem is the main result in this section. It turns out that the self-adjoint operator corresponding to $\widetilde{\Theta}$ in Lemma 8.6.2 coincides with the Schrödinger operator $A$.

**Theorem 8.6.3.** Let $\{L^2(\mathcal{C}) \oplus L^2(\mathcal{C}), \widetilde{\Gamma}\_0, \widetilde{\Gamma}\_1\}$ be the boundary triplet for $T\_{\max}^+ \oplus T\_{\max}^-$ from (8.6.4) with $\gamma$-field $\widetilde{\gamma}$, let $\widetilde{\Theta}$ be the self-adjoint relation in Lemma 8.6.2, and let $D\_\pm(\lambda)$ be the Dirichlet-to-Neumann maps corresponding to $-\Delta + V\_\pm$. Then the self-adjoint operator $\widetilde{A}\_{\widetilde{\Theta}}$ corresponding to the parameter $\widetilde{\Theta}$ coincides with the Schrödinger operator $A$ in (8.6.1), and for all $\lambda \in \mathbb{C} \setminus [\eta, \infty)$ one has the resolvent formula

$$(A - \lambda)^{-1} = (\tilde{A}\_{\rm D} - \lambda)^{-1} + \tilde{\gamma}(\lambda)\tilde{\Lambda}(\lambda)\tilde{\gamma}(\overline{\lambda})^\*,$$

where $\widetilde{\Lambda}(\lambda) \in \mathbf{B}(L^2(\mathcal{C}) \oplus L^2(\mathcal{C}))$ has the form

$$
\tilde{\Lambda}(\lambda) = \begin{pmatrix}
\iota\_- (D\_+ (\lambda) + D\_- (\lambda))^{-1} \iota\_+^{-1} & \iota\_- (D\_+ (\lambda) + D\_- (\lambda))^{-1} \iota\_+^{-1} \\
\iota\_- (D\_+ (\lambda) + D\_- (\lambda))^{-1} \iota\_+^{-1} & \iota\_- (D\_+ (\lambda) + D\_- (\lambda))^{-1} \iota\_+^{-1}
\end{pmatrix}.
$$

Proof. First it will be shown that the self-adjoint extension $\widetilde{A}\_{\widetilde{\Theta}}$ and the self-adjoint Schrödinger operator $A$ in (8.6.1) coincide. Since both operators are self-adjoint, it suffices to verify the inclusion $A \subset \widetilde{A}\_{\widetilde{\Theta}}$. For this, consider $f \in \operatorname{dom} A = H^2(\mathbb{R}^n)$ and note that $f = (f\_+, f\_-)$ satisfies $\tau\_{\mathrm{D}}^+ f\_+ = \tau\_{\mathrm{D}}^- f\_-$ and $\tau\_{\mathrm{N}}^+ f\_+ = -\tau\_{\mathrm{N}}^- f\_-$. It will be shown that $\{\widetilde{\Gamma}\_0 f, \widetilde{\Gamma}\_1 f\} \in \widetilde{\Theta}$. By the definition of the boundary mappings $\widetilde{\Gamma}\_0$ and $\widetilde{\Gamma}\_1$ in (8.6.5)–(8.6.6), one has

$$
\widetilde{\Gamma}\_0 f = \begin{pmatrix} \iota\_- \widetilde{\tau}\_{\mathcal{D}}^+ f\_+ \\ \iota\_- \widetilde{\tau}\_{\mathcal{D}}^- f\_- \end{pmatrix} \quad \text{and} \quad \widetilde{\Gamma}\_1 f = \begin{pmatrix} -\iota\_+ \tau\_{\mathcal{N}}^+ f\_{\mathcal{D},+} \\ -\iota\_+ \tau\_{\mathcal{N}}^- f\_{\mathcal{D},-} \end{pmatrix} =: \begin{pmatrix} \varphi \\ \psi \end{pmatrix}
$$

and, as $f \in H^2(\mathbb{R}^n)$, it follows that

$$\xi := \iota\_- \tau\_\mathcal{D}^+ f\_+ = \iota\_- \tilde{\tau}\_\mathcal{D}^+ f\_+ = \iota\_- \tilde{\tau}\_\mathcal{D}^- f\_- = \iota\_- \tau\_\mathcal{D}^- f\_- \in H^2(\mathbb{C}).$$

Since $f\_\pm = f\_{\mathrm{D},\pm} + f\_{\eta,\pm}$ with $f\_{\mathrm{D},\pm} \in \operatorname{dom} A\_{\mathrm{D}}^\pm$ and $f\_{\eta,\pm} \in \mathfrak{N}\_\eta(T\_{\max}^\pm)$, one has $\tau\_{\mathrm{D}}^\pm f\_\pm = \tau\_{\mathrm{D}}^\pm f\_{\eta,\pm}$ and one concludes that

$$\begin{split} \iota\_{+}\left(D\_{+}(\eta)+D\_{-}(\eta)\right)\iota\_{-}^{-1}\xi &= \iota\_{+}\left(D\_{+}(\eta)\tau\_{\rm D}^{+}f\_{\eta,+}+D\_{-}(\eta)\tau\_{\rm D}^{-}f\_{\eta,-}\right) \\ &= \iota\_{+}\left(\tau\_{\rm N}^{+}f\_{\eta,+}+\tau\_{\rm N}^{-}f\_{\eta,-}\right) \\ &= \iota\_{+}\left(\tau\_{\rm N}^{+}f\_{+}+\tau\_{\rm N}^{-}f\_{-}-\tau\_{\rm N}^{+}f\_{\rm D,+}-\tau\_{\rm N}^{-}f\_{\rm D,-}\right) \\ &= -\iota\_{+}\tau\_{\rm N}^{+}f\_{\rm D,+}-\iota\_{+}\tau\_{\rm N}^{-}f\_{\rm D,-} \\ &= \varphi+\psi, \end{split}$$

where the property $\tau\_{\mathrm{N}}^+ f\_+ = -\tau\_{\mathrm{N}}^- f\_-$ for $f \in \operatorname{dom} A$ was used. These considerations imply $\{\widetilde{\Gamma}\_0 f, \widetilde{\Gamma}\_1 f\} \in \widetilde{\Theta}$ and thus $f \in \operatorname{dom} \widetilde{A}\_{\widetilde{\Theta}}$. Therefore, $\operatorname{dom} A = H^2(\mathbb{R}^n)$ is contained in $\operatorname{dom} \widetilde{A}\_{\widetilde{\Theta}}$, and since both operators are self-adjoint, it follows that they coincide, that is, $A = \widetilde{A}\_{\widetilde{\Theta}}$.

As a consequence of Theorem 2.6.1 one has for $\lambda \in \rho(A) \cap \rho(\widetilde{A}\_{\mathrm{D}})$ that

$$(A - \lambda)^{-1} = (\widetilde{A}\_{\mathcal{D}} - \lambda)^{-1} + \widetilde{\gamma}(\lambda) \left(\widetilde{\Theta} - \widetilde{M}(\lambda)\right)^{-1} \widetilde{\gamma}(\overline{\lambda})^\*,$$

where $\widetilde{\gamma}$ and $\widetilde{M}$ are the $\gamma$-field and Weyl function, respectively, of the boundary triplet $\{L^2(\mathcal{C}) \oplus L^2(\mathcal{C}), \widetilde{\Gamma}\_0, \widetilde{\Gamma}\_1\}$ in (8.6.7). Here it is also clear from Theorem 2.6.1 that

$$\left(\widetilde{\Theta} - \widetilde{M}(\lambda)\right)^{-1} \in \mathbf{B}(L^2(\mathbb{C}) \oplus L^2(\mathbb{C})), \qquad \lambda \in \rho(A) \cap \rho(\widetilde{A}\_{\mathcal{D}}).$$

From now on consider only $\lambda \in \mathbb{C} \setminus [\eta, \infty)$. It follows from Lemma 8.6.2 and (8.6.7) that

$$\begin{aligned} & \left(\widetilde{\Theta} - \widetilde{M}(\lambda)\right)^{-1} \\ &= \left\{ \left\{ \begin{pmatrix} \varphi - M\_+(\lambda)\xi \\ \psi - M\_-(\lambda)\xi \end{pmatrix}, \begin{pmatrix} \xi \\ \xi \end{pmatrix} \right\} : \begin{array}{l} \xi \in H^2(\mathbb{C}), \\ \varphi + \psi = \iota\_+(D\_+(\eta) + D\_-(\eta))\iota\_-^{-1}\xi \end{array} \right\}, \end{aligned}$$

and setting $\vartheta\_1 = \varphi - M\_+(\lambda)\xi$ and $\vartheta\_2 = \psi - M\_-(\lambda)\xi$ one obtains

$$\begin{aligned} \vartheta\_1 + \vartheta\_2 &= \varphi + \psi - M\_+ (\lambda) \xi - M\_- (\lambda) \xi \\ &= \iota\_+ \left( D\_+ (\eta) + D\_- (\eta) \right) \iota\_-^{-1} \xi - M\_+ (\lambda) \xi - M\_- (\lambda) \xi . \end{aligned}$$

Since $M\_\pm(\lambda)\xi = \iota\_+ \left( D\_\pm(\eta) - D\_\pm(\lambda) \right) \iota\_-^{-1} \xi$ for $\xi \in H^2(\mathcal{C})$ by Lemma 8.4.5, it follows that

$$
\vartheta\_1 + \vartheta\_2 = \iota\_+ \big( D\_+ (\lambda) + D\_- (\lambda) \big) \iota\_-^{-1} \xi.
$$

Lemma 8.6.1 implies that $\iota\_+ \left( D\_+(\lambda) + D\_-(\lambda) \right) \iota\_-^{-1}$ is a bijective operator in $L^2(\mathcal{C})$ for $\lambda \in \mathbb{C} \setminus [\eta, \infty)$, and hence

$$
\iota\_- \left( D\_+ (\lambda) + D\_- (\lambda) \right)^{-1} \iota\_+^{-1} \vartheta\_1 + \iota\_- \left( D\_+ (\lambda) + D\_- (\lambda) \right)^{-1} \iota\_+^{-1} \vartheta\_2 = \xi.
$$

Therefore, one has

$$\left(\widetilde{\Theta} - \widetilde{M}(\lambda)\right)^{-1} = \begin{pmatrix} \iota\_- (D\_+(\lambda) + D\_-(\lambda))^{-1} \iota\_+^{-1} & \iota\_- (D\_+(\lambda) + D\_-(\lambda))^{-1} \iota\_+^{-1} \\ \iota\_- (D\_+(\lambda) + D\_-(\lambda))^{-1} \iota\_+^{-1} & \iota\_- (D\_+(\lambda) + D\_-(\lambda))^{-1} \iota\_+^{-1} \end{pmatrix}.$$

This completes the proof of Theorem 8.6.3. $\square$

Finally, the boundary triplet in (8.6.4) is modified in the same way as in Proposition 4.6.4 to interpret the Schrödinger operator $A$ as the self-adjoint extension corresponding to the boundary mapping $\widehat{\Gamma}\_0$. More precisely, the boundary triplets $\{L^2(\mathcal{C}), \Gamma\_0^+, \Gamma\_1^+\}$ and $\{L^2(\mathcal{C}), \Gamma\_0^-, \Gamma\_1^-\}$ lead to the boundary triplet

$$\left\{L^{2}(\mathcal{C})\oplus L^{2}(\mathcal{C}), \widehat{\Gamma}\_{0}, \widehat{\Gamma}\_{1}\right\} \tag{8.6.15}$$

for $T\_{\max} = T\_{\max}^+ \oplus T\_{\max}^-$, where

$$
\widehat{\Gamma}\_0 f = \begin{pmatrix} -\Gamma\_1^+ f\_+ - \Gamma\_1^- f\_- \\ \Gamma\_0^+ f\_+ - \Gamma\_0^- f\_- \end{pmatrix} = \begin{pmatrix} \iota\_+ (\tau\_\mathcal{N}^+ f\_{\mathcal{D},+} + \tau\_\mathcal{N}^- f\_{\mathcal{D},-}) \\ \iota\_- (\widetilde{\tau}\_\mathcal{D}^+ f\_+ - \widetilde{\tau}\_\mathcal{D}^- f\_-) \end{pmatrix}
$$

and

$$
\widehat{\Gamma}\_1 f = \begin{pmatrix} \Gamma\_0^+ f\_+ \\ -\Gamma\_1^- f\_- \end{pmatrix} = \begin{pmatrix} \iota\_- \widetilde{\tau}\_{\mathcal{D}}^+ f\_+ \\ \iota\_+ \tau\_{\mathcal{N}}^- f\_{\mathcal{D},-} \end{pmatrix}
$$

for $f = (f\_+, f\_-)$ with $f\_\pm \in \operatorname{dom} T\_{\max}^\pm$. It follows from Proposition 4.6.4 that the Schrödinger operator $A = -\Delta + V$ in (8.6.1) coincides with the self-adjoint extension defined on $\ker \widehat{\Gamma}\_0$ and that the Weyl function corresponding to the boundary triplet in (8.6.15) is given by

$$
\widehat{M}(\lambda) = -\begin{pmatrix} M\_+(\lambda) & -I \\ -I & -M\_-(\lambda)^{-1} \end{pmatrix}^{-1}, \qquad \lambda \in \mathbb{C} \ \backslash \mathbb{R},
$$

where $M\_\pm(\lambda) = \iota\_+ \left( D\_\pm(\eta) - D\_\pm(\lambda) \right) \iota\_-^{-1}$ is the Weyl function corresponding to the boundary triplet $\{L^2(\mathcal{C}), \Gamma\_0^\pm, \Gamma\_1^\pm\}$; cf. Proposition 8.4.4 and Lemma 8.4.5. In particular, the results in Section 3.5 and Section 3.6 can be used to describe the isolated and embedded eigenvalues, continuous, and absolutely continuous spectrum of $A$ with the help of the limit properties of the Dirichlet-to-Neumann maps $D\_\pm$. For this, however, one has to ensure that the underlying minimal operator $T\_{\min} = T\_{\min}^+ \oplus T\_{\min}^-$ is simple, which follows from Proposition 8.3.13 and [120, Proposition 2.2].

## **8.7 Bounded Lipschitz domains**

In this last section Schrödinger operators $-\Delta + V$ with a real function $V \in L^\infty(\Omega)$ on bounded Lipschitz domains are briefly discussed. This situation is more general than the setting of bounded $C^2$-domains treated in the previous sections. The main objective here is to highlight the differences from the $C^2$-case and to indicate which methods have to be adapted in order to obtain results of a similar nature as above.

The notions of a Lipschitz hypograph and a bounded Lipschitz domain are defined in the same way as $C^2$-hypographs and bounded $C^2$-domains in Section 8.2. More precisely, for a Lipschitz continuous function $\phi : \mathbb{R}^{n-1} \to \mathbb{R}$ the domain

$$\Omega\_{\phi} := \left\{ (x', x\_n)^\top \in \mathbb{R}^n : x\_n < \phi(x') \right\}$$

is called a Lipschitz hypograph with boundary $\partial\Omega\_\phi$. The surface integral and surface measure on $\partial\Omega\_\phi$ are defined in the same way as in (8.2.4), and this leads to the $L^2$-space $L^2(\partial\Omega\_\phi)$ on $\partial\Omega\_\phi$. For $s \in [0, 1]$ define the Sobolev space of order $s$ on $\partial\Omega\_\phi$ by

$$H^s(\partial \Omega\_\phi) := \left\{ h \in L^2(\partial \Omega\_\phi) : x' \mapsto h(x', \phi(x')) \in H^s(\mathbb{R}^{n-1}) \right\}$$

and equip $H^s(\partial\Omega\_\phi)$ with the corresponding scalar product (8.2.6).

**Definition 8.7.1.** A bounded nonempty open subset $\Omega \subset \mathbb{R}^n$ is called a Lipschitz domain if there exist open sets $U\_1, \dots, U\_l \subset \mathbb{R}^n$ and (possibly up to rotations of coordinates) Lipschitz hypographs $\Omega\_1, \dots, \Omega\_l \subset \mathbb{R}^n$ such that

$$
\partial \Omega \subset \bigcup\_{j=1}^{l} U\_j \quad \text{and} \quad \Omega \cap U\_j = \Omega\_j \cap U\_j, \quad j = 1, \dots, l.
$$

For a bounded Lipschitz domain $\Omega \subset \mathbb{R}^n$ the boundary $\partial\Omega \subset \mathbb{R}^n$ is compact. Using a partition of unity subordinate to the open cover $\{U\_j\}$ of $\partial\Omega$ one defines the surface integral, surface measure, and the $L^2$-space $L^2(\partial\Omega)$ in the same way as in Section 8.2. The Sobolev space $H^s(\partial\Omega)$ for $s \in [0, 1]$ is then defined by

$$H^s(\partial \Omega) := \left\{ h \in L^2(\partial \Omega) : \eta\_j h \in H^s(\partial \Omega\_j), \ j = 1, \dots, l \right\}$$

and equipped with the corresponding Hilbert space scalar product (8.2.7). It follows that $H^s(\partial\Omega)$, $s \in [0, 1]$, is densely and continuously embedded in $L^2(\partial\Omega)$, and the embedding $H^t(\partial\Omega) \hookrightarrow H^s(\partial\Omega)$ is compact for $s < t \le 1$. As in Section 8.2, the spaces $H^s(\partial\Omega)$, $s \in [0, 1]$, can be defined in an equivalent way via interpolation. The dual space of the antilinear continuous functionals on $H^s(\partial\Omega)$ is denoted by $H^{-s}(\partial\Omega)$, $s \in [0, 1]$.

For a bounded Lipschitz domain Ω define the spaces

$$H^s\_\Delta(\Omega) := \left\{ f \in H^s(\Omega) : \Delta f \in L^2(\Omega) \right\}, \qquad s \ge 0,$$

and equip them with the Hilbert space scalar product

$$(f,g)\_{H^s\_\Delta(\Omega)} := (f,g)\_{H^s(\Omega)} + (\Delta f, \Delta g)\_{L^2(\Omega)}, \qquad f, g \in H^s\_\Delta(\Omega). \tag{8.7.1}$$

It is clear that $H^s\_\Delta(\Omega) = H^s(\Omega)$ for $s \ge 2$ and that $H^0\_\Delta(\Omega) = \operatorname{dom} T\_{\max}$ for $s = 0$, with (8.7.1) inducing the graph norm; cf. (8.3.3). The unit normal vector field on $\partial\Omega$ pointing outwards will again be denoted by $\nu$. It is known that the Dirichlet trace mapping $C^\infty(\overline{\Omega}) \ni f \mapsto f|\_{\partial\Omega}$ extends by continuity to a continuous surjective mapping

$$\tau\_{\mathcal{D}} : H^s\_{\Delta}(\Omega) \to H^{s-1/2}(\partial \Omega), \qquad \frac{1}{2} \le s \le \frac{3}{2},$$

and that the Neumann trace mapping $C^\infty(\overline{\Omega}) \ni f \mapsto \nu \cdot \nabla f|\_{\partial\Omega}$ extends by continuity to a continuous surjective mapping

$$\tau\_{\mathsf{N}} : H^s\_{\Delta}(\Omega) \to H^{s-3/2}(\partial \Omega), \qquad \frac{1}{2} \le s \le \frac{3}{2};$$

cf. [92, 326]. For the present purposes it is particularly useful to note that the mappings

$$\tau\_{\mathsf{D}} : H^{3/2}\_{\Delta}(\Omega) \to H^1(\partial \Omega) \quad \text{and} \quad \tau\_{\mathsf{N}} : H^{3/2}\_{\Delta}(\Omega) \to L^2(\partial \Omega) \tag{8.7.2}$$

are both continuous and surjective. Furthermore, the first and second Green identities remain true in the natural form, that is,

$$(-\Delta f, g)\_{L^2(\Omega)} = (\nabla f, \nabla g)\_{L^2(\Omega; \mathbb{C}^n)} - \left< \tau\_\mathcal{N} f, \tau\_\mathcal{D} g \right>\_{H^{-1/2}(\partial \Omega) \times H^{1/2}(\partial \Omega)}$$

and

$$\begin{aligned} & (-\Delta f, g)\_{L^2(\Omega)} - (f, -\Delta g)\_{L^2(\Omega)} \\ &= \left\langle \tau\_{\mathrm{D}} f, \tau\_{\mathrm{N}} g \right\rangle\_{H^{1/2}(\partial \Omega) \times H^{-1/2}(\partial \Omega)} - \left\langle \tau\_{\mathrm{N}} f, \tau\_{\mathrm{D}} g \right\rangle\_{H^{-1/2}(\partial \Omega) \times H^{1/2}(\partial \Omega)} \end{aligned}$$

hold for all $f, g \in H^1\_\Delta(\Omega)$.

The minimal operator $T\_{\min}$ and maximal operator $T\_{\max}$ associated with $-\Delta + V$ on a bounded Lipschitz domain are defined in exactly the same way as in the beginning of Section 8.3. The assertions $T\_{\min}^\* = T\_{\max}$ and $T\_{\min} = T\_{\max}^\*$ in Proposition 8.3.1 remain valid in the present situation. Furthermore, the Dirichlet realization $A\_{\mathrm{D}}$ and Neumann realization $A\_{\mathrm{N}}$ of $-\Delta + V$ are defined as in Section 8.3, and their properties are the same as in Proposition 8.3.2 and Proposition 8.3.3. The first remarkable and substantial difference for Schrödinger operators on a bounded Lipschitz domain appears in connection with the regularity of the domains of $A\_{\mathrm{D}}$ and $A\_{\mathrm{N}}$ when comparing with Theorem 8.3.4. In the present case one has the following regularity result from [431, 432]; see also [92, 323].

**Theorem 8.7.2.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded Lipschitz domain. Then one has

$$A\_{\mathcal{D}}f = -\Delta f + Vf, \quad \text{dom}\, A\_{\mathcal{D}} = \left\{ f \in H\_{\Delta}^{3/2}(\Omega) : \tau\_{\mathcal{D}}f = 0 \right\},$$

and

$$A\_{\mathcal{N}}f = -\Delta f + Vf,\quad \text{dom}\,A\_{\mathcal{N}} = \left\{ f \in H\_{\Delta}^{3/2}(\Omega) : \tau\_{\mathcal{N}}f = 0 \right\}.$$

By the same reasoning as in Section 8.3 one obtains the following useful decomposition of the space $H^{3/2}\_\Delta(\Omega)$.

**Corollary 8.7.3.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded Lipschitz domain. Then for $\lambda \in \rho(A\_{\mathrm{D}})$ one has the direct sum decomposition

$$\begin{aligned} H\_{\Delta}^{3/2}(\Omega) &= \text{dom}\,A\_{\mathcal{D}} + \left\{ f\_{\lambda} \in H\_{\Delta}^{3/2}(\Omega) : (-\Delta + V)f\_{\lambda} = \lambda f\_{\lambda} \right\} \\ &= \ker \tau\_{\mathcal{D}} + \left\{ f\_{\lambda} \in H\_{\Delta}^{3/2}(\Omega) : (-\Delta + V)f\_{\lambda} = \lambda f\_{\lambda} \right\}, \end{aligned}$$

and for $\lambda \in \rho(A\_{\mathrm{N}})$ one has the direct sum decomposition

$$\begin{split} H^{3/2}\_{\Delta}(\Omega) &= \text{dom}\,A\_{\mathcal{N}} + \left\{ f\_{\lambda} \in H^{3/2}\_{\Delta}(\Omega) : (-\Delta + V)f\_{\lambda} = \lambda f\_{\lambda} \right\} \\ &= \text{ker}\,\tau\_{\mathcal{N}} + \left\{ f\_{\lambda} \in H^{3/2}\_{\Delta}(\Omega) : (-\Delta + V)f\_{\lambda} = \lambda f\_{\lambda} \right\}. \end{split}$$

For a bounded Lipschitz domain and $\lambda \in \rho(A\_{\mathrm{D}})$ the Dirichlet-to-Neumann map is defined as

$$D(\lambda): H^1(\partial \Omega) \to L^2(\partial \Omega), \qquad \tau\_{\mathcal{D}} f\_{\lambda} \mapsto \tau\_{\mathcal{N}} f\_{\lambda}, \tag{8.7.3}$$

where $f\_\lambda \in H^{3/2}\_\Delta(\Omega)$ is such that $(-\Delta + V)f\_\lambda = \lambda f\_\lambda$. This definition is the natural analog of Definition 8.3.6, taking into account the decomposition in Corollary 8.7.3. As before, it follows that for $\lambda \in \rho(A\_{\mathrm{D}}) \cap \rho(A\_{\mathrm{N}})$ the Dirichlet-to-Neumann map (8.7.3) is a bijective operator.
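
As a concrete illustration of the mapping properties in (8.7.3) — an example added here, not part of the original text — consider the unit disk $\Omega = \{x \in \mathbb{R}^2 : |x| < 1\}$ with $V = 0$ and $\lambda = 0 \in \rho(A\_{\mathrm{D}})$. In polar coordinates the harmonic functions $f\_n(re^{i\theta}) = r^{|n|}e^{in\theta}$, $n \in \mathbb{Z}$, are smooth up to the boundary and hence belong to $H^{3/2}\_\Delta(\Omega)$, and one computes

```latex
% Dirichlet and Neumann traces of f_n(re^{i\theta}) = r^{|n|} e^{in\theta}:
\tau_{\mathrm{D}} f_n = e^{in\theta},
\qquad
\tau_{\mathrm{N}} f_n = \partial_r f_n \big|_{r=1} = |n|\, e^{in\theta},
\qquad\text{hence}\qquad
D(0)\, e^{in\theta} = |n|\, e^{in\theta}.
```

Thus $D(0)$ acts as multiplication by $|n|$ on the Fourier basis of $L^2(\partial\Omega)$: it is bounded from $H^1(\partial\Omega)$ into $L^2(\partial\Omega)$, but unbounded when viewed as an operator in $L^2(\partial\Omega)$.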

For completeness the following a priori estimates are stated.

**Corollary 8.7.4.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded Lipschitz domain. Then there exist constants $C\_{\mathrm{D}} > 0$ and $C\_{\mathrm{N}} > 0$ such that

$$\|f\|\_{H^{3/2}\_{\Delta}(\Omega)} \le C\_{\mathcal{D}} \left( \|f\|\_{L^2(\Omega)} + \|A\_{\mathcal{D}}f\|\_{L^2(\Omega)} \right), \quad f \in \text{dom}\, A\_{\mathcal{D}},$$

and

$$\|g\|\_{H^{3/2}\_{\Delta}(\Omega)} \le C\_{\mathcal{N}} \left( \|g\|\_{L^2(\Omega)} + \|A\_{\mathcal{N}}g\|\_{L^2(\Omega)} \right), \quad g \in \text{dom}\, A\_{\mathcal{N}}.$$

Next a variant of Theorem 8.3.9 and Theorem 8.3.10 on the extensions of the Dirichlet and Neumann trace operators to $\operatorname{dom} T\_{\max} = H^0\_\Delta(\Omega)$ for bounded Lipschitz domains is formulated. For this consider the spaces

$$\mathcal{G}\_0 := \left\{ \tau\_{\mathcal{D}} f : f \in \text{dom}\, A\_{\mathcal{N}} \right\} \quad \text{and} \quad \mathcal{G}\_1 := \left\{ \tau\_{\mathcal{N}} g : g \in \text{dom}\, A\_{\mathcal{D}} \right\},\tag{8.7.4}$$

and note that for the special case of a bounded $C^2$-domain the spaces $\mathcal{G}\_0$ and $\mathcal{G}\_1$ coincide with the spaces $H^{3/2}(\partial\Omega)$ and $H^{1/2}(\partial\Omega)$, respectively. The spaces $\mathcal{G}\_0$ and $\mathcal{G}\_1$ are dense in $L^2(\partial\Omega)$ and, equipped with the scalar products

$$\begin{aligned} (\varphi, \psi)\_{\mathcal{G}\_0} &:= (\Sigma^{-1/2}\varphi, \Sigma^{-1/2}\psi)\_{L^2(\partial\Omega)}, & \Sigma &= \text{Im}\left(D(i)^{-1}\right), \\ (\varphi, \psi)\_{\mathcal{G}\_1} &:= (\Lambda^{-1/2}\varphi, \Lambda^{-1/2}\psi)\_{L^2(\partial\Omega)}, & \Lambda &= -\overline{\text{Im}\,D(i)}, \end{aligned} \tag{8.7.5}$$

they are Hilbert spaces, as was shown in [92, 115]; here both $\Sigma^{-1/2}$ and $\Lambda^{-1/2}$ are unbounded nonnegative self-adjoint operators in $L^2(\partial\Omega)$. The corresponding dual spaces of antilinear continuous functionals are denoted by $\mathcal{G}'\_0$ and $\mathcal{G}'\_1$, respectively, and one obtains Gelfand triples $\{\mathcal{G}\_i, L^2(\partial\Omega), \mathcal{G}'\_i\}$, $i = 0, 1$, which serve as the counterparts of $\{H^s(\partial\Omega), L^2(\partial\Omega), H^{-s}(\partial\Omega)\}$, $s = 1/2, 3/2$. Now one can prove the variant of Theorem 8.3.9 and Theorem 8.3.10 alluded to above.

**Theorem 8.7.5.** Assume that $\Omega \subset \mathbb{R}^n$ is a bounded Lipschitz domain. Then the Dirichlet and Neumann trace operators in (8.7.2) admit unique extensions to continuous surjective operators

$$
\widetilde{\tau}\_{\mathbf{D}} : \operatorname{dom} T\_{\max} \to \mathcal{G}'\_1 \quad \text{and} \quad \widetilde{\tau}\_{\mathbf{N}} : \operatorname{dom} T\_{\max} \to \mathcal{G}'\_0,
$$

where $\operatorname{dom} T\_{\max}$ is equipped with the graph norm. Furthermore,

$$\ker \widetilde{\tau}\_{\mathrm{D}} = \ker \tau\_{\mathrm{D}} = \operatorname{dom} A\_{\mathrm{D}} \quad \text{and} \quad \ker \widetilde{\tau}\_{\mathrm{N}} = \ker \tau\_{\mathrm{N}} = \operatorname{dom} A\_{\mathrm{N}}.$$

By analogy to Corollary 8.3.11, the second Green identity extends to elements $f \in \operatorname{dom} T\_{\max}$ and $g \in \operatorname{dom} A\_{\mathrm{D}}$ in the form

$$(T\_{\max}f,g)\_{L^2(\Omega)} - (f,T\_{\max}g)\_{L^2(\Omega)} = \langle \widetilde{\tau}\_{\mathcal{D}}f, \tau\_{\mathcal{N}}g \rangle\_{\mathcal{G}'\_1 \times \mathcal{G}\_1},$$

and for $f \in \operatorname{dom} T\_{\max}$ and $g \in \operatorname{dom} A\_{\mathrm{N}}$ the second Green identity reads

$$(T\_{\max}f,g)\_{L^2(\Omega)} - (f,T\_{\max}g)\_{L^2(\Omega)} = -\langle \widetilde{\tau}\_{\mathcal{N}}f, \tau\_{\mathcal{D}}g \rangle\_{\mathcal{G}'\_0 \times \mathcal{G}\_0}.$$

It will also be used that for $\lambda \in \rho(A\_{\mathrm{D}})$ the Dirichlet-to-Neumann map in (8.7.3) admits an extension to a bounded operator

$$
\widetilde{D}(\lambda) : \mathcal{G}'\_1 \to \mathcal{G}'\_0, \qquad \widetilde{\tau}\_{\mathcal{D}} f\_{\lambda} \mapsto \widetilde{\tau}\_{\mathcal{N}} f\_{\lambda}, \tag{8.7.6}
$$

where $f\_\lambda \in \mathfrak{N}\_\lambda(T\_{\max})$.

With the preparations above one can now follow the strategy in Section 8.4 and construct a boundary triplet for the maximal operator $T\_{\max}$ under the assumption that $\Omega \subset \mathbb{R}^n$ is a bounded Lipschitz domain. Consider the Gelfand triple $\{\mathcal{G}\_1, L^2(\partial\Omega), \mathcal{G}'\_1\}$ and the corresponding isometric isomorphisms $\iota\_+ : \mathcal{G}\_1 \to L^2(\partial\Omega)$ and $\iota\_- : \mathcal{G}'\_1 \to L^2(\partial\Omega)$ such that

$$\langle \varphi, \psi \rangle\_{\mathcal{G}'\_1 \times \mathcal{G}\_1} = (\iota\_- \varphi, \iota\_+ \psi)\_{L^2(\partial \Omega)}, \qquad \varphi \in \mathcal{G}'\_1, \psi \in \mathcal{G}\_1;$$

cf. Lemma 8.1.2. When comparing (8.1.6) and (8.7.5) it is clear that $\iota\_+ = \Lambda^{-1/2}$ and that $\iota\_-$ is the extension of $\Lambda^{1/2}$ onto $\mathcal{G}'\_1$. Recall also the definition and the properties of the Dirichlet operator $A\_{\mathrm{D}}$ in Theorem 8.7.2 and the direct sum decomposition (8.4.1).

**Theorem 8.7.6.** Let $\Omega \subset \mathbb{R}^n$ be a bounded Lipschitz domain and let $A\_{\mathrm{D}}$ be the self-adjoint Dirichlet realization of $-\Delta + V$ in $L^2(\Omega)$ in Theorem 8.7.2. Fix a number $\eta \in \rho(A\_{\mathrm{D}}) \cap \mathbb{R}$ and decompose $f \in \operatorname{dom} T\_{\max}$ according to (8.4.1) in the form $f = f\_{\mathrm{D}} + f\_\eta$, where $f\_{\mathrm{D}} \in \operatorname{dom} A\_{\mathrm{D}}$ and $f\_\eta \in \mathfrak{N}\_\eta(T\_{\max})$. Then $\{L^2(\partial\Omega), \Gamma\_0, \Gamma\_1\}$, where

$$
\Gamma\_0 f = \iota\_- \widetilde{\tau}\_{\mathcal{D}} f \quad \text{and} \quad \Gamma\_1 f = -\iota\_+ \tau\_{\mathcal{N}} f\_{\mathcal{D}}, \qquad f = f\_{\mathcal{D}} + f\_{\eta} \in \text{dom}\, T\_{\text{max}},
$$

is a boundary triplet for $(T\_{\min})^\* = T\_{\max}$ such that

$$A\_0 = A\_\mathcal{D} \qquad \text{and} \qquad A\_1 = T\_{\min} \widehat{+} \mathfrak{N}\_\eta(T\_{\max}).$$

The $\gamma$-field and Weyl function corresponding to the boundary triplet in Theorem 8.7.6 are formally the same as in Proposition 8.4.4. In fact, if $f\_\eta(\varphi)$ denotes the unique element in $\mathfrak{N}\_\eta(T\_{\max})$ such that $\Gamma\_0 f\_\eta(\varphi) = \varphi$, then for all $\lambda \in \rho(A\_{\mathrm{D}})$ the $\gamma$-field is given by

$$
\gamma(\lambda)\varphi = \left(I + (\lambda - \eta)(A\_{\mathcal{D}} - \lambda)^{-1}\right)f\_{\eta}(\varphi), \quad \varphi \in L^2(\partial\Omega),
$$

where $f\_\lambda(\varphi) := \gamma(\lambda)\varphi$ is the unique element in $\mathfrak{N}\_\lambda(T\_{\max})$ such that $\Gamma\_0 f\_\lambda(\varphi) = \varphi$. As in Proposition 8.4.4 one also has

$$
\gamma(\lambda)^\* = -\iota\_+ \tau\_\mathcal{N} (A\_\mathcal{D} - \overline{\lambda})^{-1}, \quad \lambda \in \rho(A\_\mathcal{D}).
$$

Moreover, the Weyl function M is given by

$$M(\lambda)\varphi = (\eta - \lambda)\iota\_+\tau\_\mathcal{N}(A\_\mathcal{D} - \lambda)^{-1}f\_\eta(\varphi), \quad \varphi \in L^2(\partial\Omega).$$

As in the case of bounded $C^2$-domains, the Weyl function can be expressed via the Dirichlet-to-Neumann map; here the extended mapping $\widetilde{D}(\lambda)$ in (8.7.6) is used. In the same way as in Lemma 8.4.5 one verifies the relation

$$M(\lambda) = \iota\_+ \left( \tilde{D}(\eta) - \tilde{D}(\lambda) \right) \iota\_-^{-1}.$$

With the boundary triplet $\{L^2(\partial\Omega), \Gamma\_0, \Gamma\_1\}$ in Theorem 8.7.6 and the corresponding $\gamma$-field and Weyl function, the self-adjoint realizations of $-\Delta + V$ on a bounded Lipschitz domain $\Omega \subset \mathbb{R}^n$ can be parametrized and their spectral properties can be described in a similar form as in Section 8.4. The discussion of the semibounded extensions and of the corresponding sesquilinear forms with the help of a compatible boundary pair is parallel to the considerations in Section 8.5 and is not provided here. Finally, the coupling technique for Schrödinger operators from Section 8.6 also extends, under appropriate modifications, to the general situation of Lipschitz domains.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Appendix A**

## **Integral Representations of Nevanlinna Functions**

Operator-valued Nevanlinna functions and their integral representations are presented in this appendix. First the case of scalar Nevanlinna functions is considered. Then follows a short introduction to operator-valued integrals; by interpreting these integrals as improper integrals the methods are kept as simple as possible. The general operator-valued Nevanlinna functions are treated based on the previous notions. Special operator-valued Nevanlinna functions such as Kac functions, Stieltjes functions, and inverse Stieltjes functions are discussed in detail.

## **A.1 Borel transforms and their Stieltjes inversion**

This preparatory section contains a brief discussion of the Stieltjes inversion formula for the Borel transform. The form of the transform and the conditions have been chosen so that the results are easy to apply. In particular, with the inversion formula one can prove a weak form of the Stone inversion formula, a useful denseness property, and a general form of the Stieltjes inversion formula for Nevanlinna functions.

Let <sup>τ</sup> : <sup>R</sup> <sup>→</sup> <sup>R</sup> be a nondecreasing function and let <sup>g</sup> : <sup>R</sup> <sup>→</sup> <sup>C</sup> be a measurable function such that

$$\int\_{\mathbb{R}} \frac{|g(t)|}{|t|+1} \, d\tau(t) < \infty. \tag{A.1.1}$$

The Borel transform <sup>G</sup> : <sup>C</sup> \ <sup>R</sup> <sup>→</sup> <sup>C</sup> of the combination g dτ is defined by

$$G(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, g(t) \, d\tau(t), \quad \lambda \in \mathbb{C} \, \backslash \, \mathbb{R}. \tag{A.1.2}$$
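
As a simple illustration — an added example, not part of the original text — two Borel transforms can be written in closed form, both with $g(t) = 1$:

```latex
% Unit point mass at t_0 (tau a step of height 1 at t_0):
G(\lambda) = \int_{\mathbb{R}} \frac{d\tau(t)}{t - \lambda}
           = \frac{1}{t_0 - \lambda},
\qquad \lambda \in \mathbb{C} \setminus \mathbb{R};
% Lebesgue measure restricted to [0,1], i.e. d\tau(t) = \chi_{[0,1]}(t)\,dt:
G(\lambda) = \int_0^1 \frac{dt}{t - \lambda}
           = \log \frac{1-\lambda}{-\lambda},
\qquad \lambda \in \mathbb{C} \setminus \mathbb{R},
```

where in the second case the principal branch of the logarithm may be used, since the quotient $(1-\lambda)/(-\lambda)$ avoids $(-\infty, 0]$ for $\lambda \notin [0, 1]$. In both cases $G$ is holomorphic on $\mathbb{C} \setminus \mathbb{R}$, as asserted below.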

Observe that the Borel transform $G$ of $g \, d\tau$ in (A.1.2) is well defined and holomorphic on $\mathbb{C} \setminus \mathbb{R}$. The following result is the Stieltjes inversion formula for the Borel transform in (A.1.2).

**Proposition A.1.1.** Let <sup>τ</sup> : <sup>R</sup> <sup>→</sup> <sup>R</sup> be a nondecreasing function, let <sup>g</sup> : <sup>R</sup> <sup>→</sup> <sup>C</sup> be a measurable function which satisfies (A.1.1), and let G be the Borel transform of g dτ . Then

$$\begin{aligned} &\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_a^b \left( G(s + i\varepsilon) - G(s - i\varepsilon) \right) ds \\ &= \frac{1}{2} \int\_{\{a\}} g(t) \, d\tau(t) + \int\_{a+}^{b-} g(t) \, d\tau(t) + \frac{1}{2} \int\_{\{b\}} g(t) \, d\tau(t) \end{aligned}$$

holds for each compact interval $[a, b] \subset \mathbb{R}$. Furthermore, if $g \in L^1\_{d\tau}(\mathbb{R})$, then for $0 < \varepsilon < 1$:

$$\frac{1}{2\pi} \left| \int\_{a}^{b} \left( G(s + i\varepsilon) - G(s - i\varepsilon) \right) ds \right| \le \int\_{\mathbb{R}} |g(t)| \, d\tau(t). \tag{A.1.3}$$

Proof. For $\varepsilon > 0$ and $s \in \mathbb{R}$ one has

$$\frac{G(s+i\varepsilon)-G(s-i\varepsilon)}{2\pi i} = \frac{1}{\pi} \int\_{\mathbb{R}} \frac{\varepsilon}{(s-t)^2 + \varepsilon^2} \, g(t) \, d\tau(t).$$
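
This kernel identity follows from a one-line partial-fraction computation added here for convenience: for fixed $t, s \in \mathbb{R}$ and $\varepsilon > 0$,

```latex
\frac{1}{t - (s + i\varepsilon)} - \frac{1}{t - (s - i\varepsilon)}
  = \frac{(t - s + i\varepsilon) - (t - s - i\varepsilon)}{(t-s)^2 + \varepsilon^2}
  = \frac{2i\varepsilon}{(s-t)^2 + \varepsilon^2},
```

so dividing by $2\pi i$ and integrating against $g \, d\tau$ yields the Poisson-kernel form above.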

Integration of the left-hand side over the interval [a, b] and Fubini's theorem lead to

$$\begin{split} \frac{1}{2\pi i} \int\_{a}^{b} \left( G(s + i\varepsilon) - G(s - i\varepsilon) \right) ds \\ &= \frac{1}{\pi} \int\_{\mathbb{R}} \left( \int\_{a}^{b} \frac{\varepsilon}{(s - t)^{2} + \varepsilon^{2}} \, ds \right) g(t) \, d\tau(t) \\ &= \frac{1}{\pi} \int\_{\mathbb{R}} \left( \arctan \left( \frac{b - t}{\varepsilon} \right) - \arctan \left( \frac{a - t}{\varepsilon} \right) \right) g(t) \, d\tau(t). \end{split} \tag{A.1.4}$$

In order to justify the use of Fubini's theorem in (A.1.4) note first that the functions

$$
\arctan\left(\frac{b-t}{\varepsilon}\right) - \arctan\left(\frac{a-t}{\varepsilon}\right), \qquad 0 < \varepsilon < 1,\tag{A.1.5}
$$

are nonnegative and bounded on $\mathbb{R}$. Furthermore, observe that for $0 < \varepsilon < 1$ and $x > 0$ one has $\arctan x < \arctan(x/\varepsilon) < \pi/2$, and hence the functions in (A.1.5) admit the upper bound

$$k(t) = \begin{cases} \frac{\pi}{2} - \arctan(a - t), & t \le a, \\ \pi, & a < t < b, \\ \arctan(b - t) + \frac{\pi}{2}, & t \ge b. \end{cases}$$

Since

$$\begin{aligned} t\left(\frac{\pi}{2} - \arctan(a - t)\right) &\to -1, \quad t \to -\infty, \\ t\left(\arctan(b - t) + \frac{\pi}{2}\right) &\to 1, \quad t \to \infty, \end{aligned}$$

one has $kg \in L^1\_{d\tau}(\mathbb{R})$ by (A.1.1), and it follows that the integral on the right-hand side in (A.1.4) is finite. Thus, the interchange of integration in (A.1.4) is justified.

Now the dominated convergence theorem will be applied to (A.1.4). For ε ↓ 0 one has

$$
\arctan\left(\frac{b-t}{\varepsilon}\right) - \arctan\left(\frac{a-t}{\varepsilon}\right) \to \begin{cases} 0, & t < a, \\ \pi/2, & t = a, \\ \pi, & a < t < b, \\ \pi/2, & t = b, \\ 0, & t > b, \end{cases}
$$

and as an integrable majorant one can use k|g|. The right-hand side of (A.1.4) then shows that

$$\begin{aligned} &\lim\_{\varepsilon \downarrow 0} \frac{1}{\pi} \int\_{\mathbb{R}} \left( \arctan \left( \frac{b-t}{\varepsilon} \right) - \arctan \left( \frac{a-t}{\varepsilon} \right) \right) g(t) \, d\tau(t) \\ & \qquad = \frac{1}{2} \int\_{\{a\}} g(t) \, d\tau(t) + \int\_{a+}^{b-} g(t) \, d\tau(t) + \frac{1}{2} \int\_{\{b\}} g(t) \, d\tau(t), \end{aligned}$$

which leads to the assertion.

Finally, assume that g ∈ L¹_dτ(R) and 0 < ε < 1. Then the estimate (A.1.3) follows from (A.1.4) and (A.1.5). □

Observe that when the function g in (A.1.1) is real, then the function G in (A.1.2) satisfies the symmetry property $G(\overline{\lambda}) = \overline{G(\lambda)}$, in which case one has that

$$\frac{1}{2\pi i} \int\_{a}^{b} \left( G(s + i\varepsilon) - G(s - i\varepsilon) \right) ds = \frac{1}{\pi} \int\_{a}^{b} \text{Im} \, G(s + i\varepsilon) \, ds.$$

The special case g(t) = 1 in Proposition A.1.1 is of particular interest; see for instance Chapter 3.

**Corollary A.1.2.** Let τ : R → R be a nondecreasing function which satisfies the integrability condition

$$\int\_{\mathbb{R}} \frac{1}{|t|+1} \, d\tau(t) < \infty \tag{A.1.6}$$

and let G be given by

$$G(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\tau(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Then the inversion formula

$$\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{a}^{b} \left( G(s + i\varepsilon) - G(s - i\varepsilon) \right) ds = \frac{\tau(b+) + \tau(b-)}{2} - \frac{\tau(a+) + \tau(a-)}{2} \tag{A.1.7}$$

holds for every compact interval [a, b] ⊂ R. In particular, if τ : R → R is a bounded nondecreasing function, then (A.1.6) is satisfied and (A.1.7) holds.
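The inversion formula (A.1.7) can be checked numerically for a purely discrete measure, where the Borel transform reduces to a finite sum of simple fractions. The following Python sketch (the nodes, weights, and interval are arbitrary illustrative choices, not taken from the text) approximates the left-hand side of (A.1.7) by a midpoint rule for a small ε:

```python
import numpy as np

# dtau: point masses w_j at nodes t_j (arbitrary illustrative measure)
nodes = np.array([-1.0, 0.5, 2.0])
weights = np.array([0.3, 1.1, 0.7])

def G(lam):
    """Borel transform G(lam) = sum_j w_j / (t_j - lam), cf. (A.1.2) with g = 1."""
    return np.sum(weights / (nodes - lam))

def inversion(a, b, eps, n=100000):
    """Midpoint-rule approximation of (1/(2 pi i)) int_a^b (G(s+i eps)-G(s-i eps)) ds."""
    h = (b - a) / n
    s = a + h * (np.arange(n) + 0.5)
    vals = [(G(x + 1j * eps) - G(x - 1j * eps)) / (2j * np.pi) for x in s]
    return (h * np.sum(vals)).real

# [0, 2] contains the node 0.5 in its interior; the node 2.0 sits at the
# endpoint b and is therefore counted with weight 1/2, cf. (A.1.7)
approx = inversion(0.0, 2.0, eps=1e-3)
exact = 1.1 + 0.5 * 0.7
print(approx, exact)   # the two values agree to a few decimals
```

Note that the endpoint mass enters with the factor 1/2, exactly as in Proposition A.1.1 and (A.1.7).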

The Stieltjes inversion result in Proposition A.1.1 and Corollary A.1.2 has a number of interesting consequences. A first observation concerns functions of bounded variation. Recall that any function of bounded variation on R is a linear combination of four bounded nondecreasing functions. Hence, the following corollary is straightforward.

**Corollary A.1.3.** Let τ : R → C be a function of bounded variation. Then the function

$$H(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\tau(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

is well defined and holomorphic, and for each compact interval [a, b] ⊂ R one has

$$\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{a}^{b} \left( H(s + i\varepsilon) - H(s - i\varepsilon) \right) ds = \frac{\tau(b+) + \tau(b-)}{2} - \frac{\tau(a+) + \tau(a-)}{2}.$$

Proposition A.1.1 and Corollary A.1.2 can also be used to compute the spectral projection of a self-adjoint relation via Stone's formula; see Chapter 1.

**Example A.1.4.** Let H be a self-adjoint relation in a Hilbert space H and let E(·) be the corresponding spectral measure. For f ∈ H consider the function

$$G(\lambda) = \left( (H - \lambda)^{-1} f, f \right), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

It is clear that τ(t) = (E((−∞, t))f, f), t ∈ R, is a bounded nondecreasing function and that

$$G(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\tau(t).$$

By Proposition A.1.1 with g(t) = 1, t ∈ R, one has for [a, b] ⊂ R

$$\begin{aligned} \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_a^b \left( \left( (H - (s + i\varepsilon))^{-1} - (H - (s - i\varepsilon))^{-1} \right) f, f \right) ds \\ &= \frac{1}{2} (E(\{a\})f, f) + (E((a, b))f, f) + \frac{1}{2} (E(\{b\})f, f), \end{aligned}$$

which is Stone's formula in the weak sense; cf. (1.5.4) and (1.5.7) in Section 1.5.
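As a finite-dimensional illustration of Example A.1.4 (a sketch, with an arbitrary Hermitian matrix standing in for the self-adjoint relation H), Stone's formula can be compared with the spectral projection obtained from an eigendecomposition:

```python
import numpy as np

# a Hermitian matrix plays the role of the self-adjoint relation H
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, -1.0]])
f = np.array([1.0, 2.0, -1.0])

def G(lam):
    """G(lam) = ((H - lam)^{-1} f, f), the resolvent evaluated at f."""
    return np.vdot(f, np.linalg.solve(H - lam * np.eye(3), f))

def stone(a, b, eps, n=40000):
    """Midpoint-rule approximation of Stone's formula on [a, b]."""
    h = (b - a) / n
    s = a + h * (np.arange(n) + 0.5)
    vals = [(G(x + 1j * eps) - G(x - 1j * eps)) / (2j * np.pi) for x in s]
    return (h * np.sum(vals)).real

# (E((a, b))f, f) from the eigendecomposition; a and b avoid the spectrum,
# so the endpoint terms in Stone's formula vanish
w, V = np.linalg.eigh(H)
a, b = -0.5, 1.5
exact = sum(abs(np.vdot(V[:, j], f)) ** 2 for j in range(3) if a < w[j] < b)
approx = stone(a, b, eps=1e-3)
print(approx, exact)
```

The interval endpoints are chosen away from the eigenvalues of H, so no half-weighted endpoint terms occur.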

Another consequence is the Stieltjes inversion formula for Nevanlinna functions; see Lemma A.2.7 and Corollary A.2.8. Moreover, there is the following denseness statement for a space of the form L²_dσ(R) which is used in Section 4.3.

**Corollary A.1.5.** Assume that the function σ : R → R is nondecreasing and that it satisfies

$$\int\_{\mathbb{R}} \frac{1}{t^2 + 1} \, d\sigma(t) < \infty. \tag{A.1.8}$$

Let g be an element in L²_dσ(R) such that

$$\int\_{\mathbb{R}} \frac{1}{t - \lambda} \, g(t) \, d\sigma(t) = 0, \qquad \lambda \in \mathbb{C} \, \backslash \mathbb{R}. \tag{A.1.9}$$

Then g = 0 in L²_dσ(R).

Proof. The conditions (A.1.8) and g ∈ L²_dσ(R) show, using the Cauchy–Schwarz inequality, that

$$\int\_{\mathbb{R}} \frac{|g(t)|}{\sqrt{t^2+1}} \, d\sigma(t) \le \left( \int\_{\mathbb{R}} |g(t)|^2 \, d\sigma(t) \right)^{1/2} \left( \int\_{\mathbb{R}} \frac{1}{t^2+1} \, d\sigma(t) \right)^{1/2} < \infty,$$

which implies that the condition (A.1.1) is satisfied. It follows from (A.1.9) and Proposition A.1.1 that for each compact interval [a, b] ⊂ R one has

$$\frac{1}{2} \int\_{\{a\}} g(t) \, d\sigma(t) + \int\_{a+}^{b-} g(t) \, d\sigma(t) + \frac{1}{2} \int\_{\{b\}} g(t) \, d\sigma(t) = 0. \tag{A.1.10}$$

Next it will be shown that the contributions of the endpoints a and b in (A.1.10) are trivial. To see this, suppose that a is a point mass of dσ and choose η_k > 0, k = 1, 2, …, such that η_k → 0 as k → ∞ and a ± η_k are not point masses of dσ, which is possible since the point masses of dσ form a countable subset of R. Now Proposition A.1.1 for the compact interval [a − η_k, a + η_k] ⊂ R, (A.1.9), and dominated convergence lead to

$$0 = \lim\_{k \to \infty} \int\_{a-\eta\_k}^{a+\eta\_k} g(t) \, d\sigma(t) = \int\_{\{a\}} g(t) \, d\sigma(t).$$

The same argument shows that the contribution of the endpoint b in (A.1.10) is trivial, and hence (A.1.10) reduces to

$$\int\_{a}^{b} g(t) \, d\sigma(t) = 0 \quad \text{for all } a < b.$$

In other words, g is orthogonal to all characteristic functions in L²_dσ(R) and hence g = 0 in L²_dσ(R). □

**Remark A.1.6.** Sometimes it is useful to have a matrix-valued version of the Borel transform in (A.1.2) and of the Stieltjes inversion formula in Proposition A.1.1; see also Remark A.2.10. More precisely, if g is a measurable n×n matrix function on R such that the entries of g satisfy the integrability condition (A.1.1), then Proposition A.1.1 remains valid with the integrals interpreted in the matrix sense.

## **A.2 Scalar Nevanlinna functions**

This section contains a brief treatment of the integral representation of scalar Nevanlinna functions and its consequences.

**Lemma A.2.1.** Let f : D → C be a holomorphic function and let r ∈ (0, 1). Then the representation

$$f(z) = i \text{Im} \, f(0) + \frac{1}{2\pi} \int\_0^{2\pi} \frac{re^{it} + z}{re^{it} - z} \text{Re} \, f(re^{it}) \, dt$$

holds for all z ∈ D with |z| < r.

Proof. Let r ∈ (0, 1) and T_r = {z ∈ C : |z| = r}. Then for z ∈ D, |z| < r, one obtains by Cauchy's integral formula

$$f(z) = \frac{1}{2\pi i} \int\_{\mathbb{T}\_r} \frac{f(w)}{w - z} \, dw = \frac{1}{2\pi} \int\_0^{2\pi} \frac{re^{it}}{re^{it} - z} f(re^{it}) \, dt,\tag{A.2.1}$$

and, since $|r^2/\overline{z}| > r$, one obtains in a similar way

$$f(0) = \frac{1}{2\pi i} \int\_{\mathbb{T}\_r} \frac{f(w)}{w(1 - w(\overline{z}/r^2))} \, dw = \frac{1}{2\pi} \int\_0^{2\pi} \frac{re^{-it}}{re^{-it} - \overline{z}} f(re^{it}) \, dt,$$

and, by taking complex conjugates,

$$\overline{f(0)} = \frac{1}{2\pi} \int\_0^{2\pi} \frac{re^{it}}{re^{it} - z} \,\overline{f(re^{it})} \, dt.$$

Furthermore, it is clear from (A.2.1) that

$$\operatorname{Re} f(0) = \frac{1}{2\pi} \int\_0^{2\pi} \operatorname{Re} f(re^{it}) \, dt.$$

Therefore,

$$\begin{split} \frac{1}{2\pi} \int\_{0}^{2\pi} \frac{re^{it} + z}{re^{it} - z} \operatorname{Re} f(re^{it}) \, dt \\ &= \frac{1}{2\pi} \int\_{0}^{2\pi} \left( \frac{2re^{it}}{re^{it} - z} - 1 \right) \frac{f(re^{it}) + \overline{f(re^{it})}}{2} \, dt \\ &= \frac{1}{2\pi} \int\_{0}^{2\pi} \frac{re^{it}}{re^{it} - z} \, f(re^{it}) \, dt + \frac{1}{2\pi} \int\_{0}^{2\pi} \frac{re^{it}}{re^{it} - z} \, \overline{f(re^{it})} \, dt - \operatorname{Re} f(0) \\ &= f(z) + \overline{f(0)} - \operatorname{Re} f(0) = f(z) - i \operatorname{Im} f(0), \end{split}$$

which implies the assertion of the lemma. □

If f : D → C is a holomorphic function such that Re f(re^{it}) ≥ 0, then the expression

$$\operatorname{Re} f(re^{it}) \, dt/2\pi$$

in Lemma A.2.1 leads to a measure. This observation is important in the proof of the next lemma.

**Lemma A.2.2.** Let f : D → C be a function. Then the following statements are equivalent:

(i) f has an integral representation of the form

$$f(z) = ic + \int\_0^{2\pi} \frac{e^{it} + z}{e^{it} - z} \, d\tau(t), \qquad z \in \mathbb{D},$$

with c ∈ R and a bounded nondecreasing function τ : [0, 2π] → R.

(ii) f is holomorphic on D and Re f(z) ≥ 0 for all z ∈ D.

Proof. (i) ⇒ (ii) It is clear that f is holomorphic on D. For z ∈ D a straightforward calculation shows that

$$\operatorname{Re} f(z) = \int\_0^{2\pi} \frac{1 - |z|^2}{|e^{it} - z|^2} \, d\tau(t) \ge 0.$$

(ii) ⇒ (i) Since Re f(z) ≥ 0, z ∈ D, it is obvious that for any r ∈ (0, 1) the function

$$\tau\_r : [0, 2\pi] \to \mathbb{R}, \qquad t \mapsto \frac{1}{2\pi} \int\_0^t \text{Re}\, f(re^{is}) \, ds,$$

is nondecreasing. Furthermore, Cauchy's integral formula shows that

$$\tau\_r(2\pi) = \frac{1}{2\pi} \int\_0^{2\pi} \text{Re}\, f(re^{is}) \, ds = \text{Re}\, \left(\frac{1}{2\pi i} \int\_{\mathbb{T}\_r} \frac{f(w)}{w} \, dw\right) = \text{Re}\, f(0),$$

and so

$$0 = \tau\_r(0) \le \tau\_r(t) \le \tau\_r(2\pi) = \text{Re}\, f(0) < \infty \tag{A.2.2}$$

for t ∈ (0, 2π) and r ∈ (0, 1). Therefore, the Borel measure induced by τ_r on [0, 2π] is finite, and since Re f is continuous, dτ_r, r ∈ (0, 1), is a regular Borel measure on [0, 2π]. Observe that, by Lemma A.2.1,

$$\begin{split} f(z) &= i \text{Im} \, f(0) + \frac{1}{2\pi} \int\_0^{2\pi} \frac{re^{it} + z}{re^{it} - z} \operatorname{Re} \, f(re^{it}) \, dt \\ &= i \text{Im} \, f(0) + \int\_0^{2\pi} \frac{re^{it} + z}{re^{it} - z} \, d\tau\_r(t) \end{split} \tag{A.2.3}$$

for all z ∈ D, |z| < r. Next it will be verified that the above formula remains valid when r tends to 1.

By the Helly selection principle (cf. [763, Theorem 16.2]) and (A.2.2), there exist a nondecreasing sequence (r_k), k = 1, 2, …, tending to 1 and a nondecreasing function τ such that τ_{r_k}(t) → τ(t), 0 ≤ t ≤ 2π. Moreover, according to the Helly–Bray theorem (cf. [763, Theorem 16.4]), one has

$$\lim\_{k \to \infty} \int\_0^{2\pi} h(t) \, d\tau\_{r\_k}(t) = \int\_0^{2\pi} h(t) \, d\tau(t)$$

for all continuous functions h : [0, 2π] → C. Observe that, by (A.2.2), in particular

$$
\tau(2\pi) - \tau(0) = \text{Re}\, f(0).
$$

Therefore, using the Helly–Bray theorem and the fact that $t \mapsto \frac{r_k e^{it}+z}{r_k e^{it}-z}$ converges uniformly on [0, 2π] to $t \mapsto \frac{e^{it}+z}{e^{it}-z}$, one finds that

$$\begin{split} &\lim\_{k\to\infty} \int\_{0}^{2\pi} \frac{r\_{k}e^{it}+z}{r\_{k}e^{it}-z} \, d\tau\_{r\_k}(t) \\ & \qquad = \int\_{0}^{2\pi} \frac{e^{it}+z}{e^{it}-z} \, d\tau(t) + \lim\_{k\to\infty} \int\_{0}^{2\pi} \left(\frac{r\_{k}e^{it}+z}{r\_{k}e^{it}-z} - \frac{e^{it}+z}{e^{it}-z}\right) \, d\tau\_{r\_k}(t) \\ & \qquad = \int\_{0}^{2\pi} \frac{e^{it}+z}{e^{it}-z} \, d\tau(t). \end{split}$$

Hence, (A.2.3) yields

$$f(z) = ic + \int\_0^{2\pi} \frac{e^{it} + z}{e^{it} - z} \, d\tau(t) \quad \text{and} \quad c = \text{Im} \, f(0),$$

as needed. □

Here is the definition of a scalar Nevanlinna function. The operator-valued version will be considered in Definition A.4.1.

**Definition A.2.3.** A function F : C \ R → C is called a Nevanlinna function if

(i) F is holomorphic on C \ R;

(ii) $F(\overline{\lambda}) = \overline{F(\lambda)}$ for all λ ∈ C \ R;

(iii) Im F(λ) ≥ 0 for all λ ∈ C⁺.
The next result provides an integral representation for Nevanlinna functions.

**Theorem A.2.4.** Let F : C \ R → C. Then the following statements are equivalent:

(i) F has an integral representation of the form

$$F(\lambda) = \alpha + \beta \lambda + \int\_{\mathbb{R}} \frac{1 + t\lambda}{t - \lambda} \, d\theta(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{A.2.4}$$

with α ∈ R, β ≥ 0, and a bounded nondecreasing function θ : R → R.

(ii) F is a Nevanlinna function.


Proof. (i) ⇒ (ii) It is clear from the representation (A.2.4) that F is holomorphic on C \ R and that $F(\overline{\lambda}) = \overline{F(\lambda)}$. Moreover, it follows that

$$\frac{\operatorname{Im} F(\lambda)}{\operatorname{Im} \lambda} = \beta + \int\_{\mathbb{R}} \frac{t^2 + 1}{|t - \lambda|^2} \, d\theta(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Hence, F is a Nevanlinna function.

(ii) ⇒ (i) Assume that F is a Nevanlinna function and consider the following transformations

$$
\lambda = i \frac{1+z}{1-z}, \quad z \in \mathbb{D}, \qquad F(\lambda) = if(z).
$$

Note that z ↦ λ is a bijective mapping from D onto C⁺ and that the function f is holomorphic on D. Furthermore, observe that

$$\operatorname{Im} F(\lambda) \ge 0 \quad \Rightarrow \quad \operatorname{Re} f(z) \ge 0.$$

Hence, according to Lemma A.2.2, there exist c ∈ R and a bounded nondecreasing function τ : [0, 2π] → R such that the function f has the representation

$$\begin{split} f(z) &= ic + \int\_0^{2\pi} \frac{e^{is} + z}{e^{is} - z} \, d\tau(s) \\ &= ic + \int\_{0+}^{2\pi-} \frac{e^{is} + z}{e^{is} - z} \, d\tau(s) \\ &\quad + \frac{1+z}{1-z} \Big( \tau(2\pi) - \tau(2\pi -) + \tau(0+) - \tau(0) \Big) \\ &= ic + \beta \frac{1+z}{1-z} + \int\_{0+}^{2\pi-} \frac{e^{is} + z}{e^{is} - z} \, d\tau(s), \end{split}$$

where β = τ(2π) − τ(2π−) + τ(0+) − τ(0) ≥ 0. Thus, the function F has the integral representation

$$F(\lambda) = -c + \beta \lambda + i \int\_{0+}^{2\pi-} \frac{e^{is} + z}{e^{is} - z} \, d\tau(s).$$

Since z = (λ − i)/(λ + i), one sees that

$$F(\lambda) = -c + \beta \lambda + \int\_{0+}^{2\pi-} \frac{\lambda \cot(s/2) - 1}{\cot(s/2) + \lambda} \, d\tau(s).$$

With the substitutions α = −c and t = −cot(s/2), and the function θ defined by θ(t) = τ(s), one finds that

$$F(\lambda) = \alpha + \beta \lambda + \int\_{\mathbb{R}} \frac{1 + t\lambda}{t - \lambda} \, d\theta(t).$$

Note that the function θ : R → R is bounded since the function τ : [0, 2π] → R is bounded. □

There is an equivalent formulation of this theorem involving the possibly unbounded measure dσ(t) = (t² + 1) dθ(t) defined by

$$
\sigma(t) = \int\_0^t (s^2 + 1) d\theta(s),
$$

which is equivalent to

$$
\theta(b) - \theta(a) = \int\_a^b \frac{d\sigma(t)}{t^2 + 1}
$$

for every compact interval [a, b] ⊂ R. Hence, one obtains the following variant of Theorem A.2.4.

**Theorem A.2.5.** Let F : C \ R → C. Then the following statements are equivalent:

(i) F has an integral representation of the form

$$F(\lambda) = \alpha + \beta \lambda + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) d\sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{A.2.5}$$

with α ∈ R, β ≥ 0, and a nondecreasing function σ : R → R such that

$$\int\_{\mathbb{R}} \frac{d\sigma(t)}{t^2 + 1} < \infty.$$

(ii) F is a Nevanlinna function.
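For a measure dθ with finitely many point masses, the passage from (A.2.4) to (A.2.5) via dσ(t) = (t² + 1) dθ(t) can be checked directly. The following Python sketch (nodes, weights, α, and β are arbitrary illustrative choices) evaluates both representations at a sample point:

```python
import numpy as np

# point masses of dtheta at nodes t_j (arbitrary illustrative data)
nodes = np.array([-2.0, 0.0, 1.5])
w_theta = np.array([0.2, 0.5, 0.3])
w_sigma = (nodes ** 2 + 1) * w_theta   # dsigma = (t^2 + 1) dtheta
alpha, beta = 0.7, 0.1

def F_theta(lam):
    """Representation (A.2.4)."""
    return alpha + beta * lam + np.sum((1 + nodes * lam) / (nodes - lam) * w_theta)

def F_sigma(lam):
    """Representation (A.2.5)."""
    return alpha + beta * lam + np.sum(
        (1 / (nodes - lam) - nodes / (nodes ** 2 + 1)) * w_sigma)

lam = 0.3 + 0.8j
print(F_theta(lam), F_sigma(lam))   # identical up to rounding
```

The agreement reflects the algebraic identity (1/(t − λ) − t/(t² + 1))(t² + 1) = (1 + tλ)/(t − λ).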

It follows from the integral representation (A.2.5) that the imaginary part of the function F satisfies

$$\frac{\operatorname{Im} F(\lambda)}{\operatorname{Im} \lambda} = \beta + \int\_{\mathbb{R}} \frac{1}{|t - \lambda|^2} \, d\sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{A.2.6}$$

With (A.2.5) and (A.2.6) it is possible to recover the ingredients in the integral formula (A.2.5) directly in terms of the function F. These results are used in Chapter 3.

**Lemma A.2.6.** Let F be a Nevanlinna function as in Theorem A.2.5. Then

$$\alpha = \operatorname{Re} F(i) \quad \text{and} \quad \beta = \lim\_{y \to \infty} \frac{F(iy)}{iy} = \lim\_{y \to \infty} \frac{\operatorname{Im} F(iy)}{y}. \tag{A.2.7}$$

Moreover, for all x ∈ R

$$\lim\_{y \downarrow 0} y \operatorname{Im} F(x+iy) = \sigma(x+) - \sigma(x-) \tag{A.2.8}$$

and

$$\lim\_{y \downarrow 0} y \operatorname{Re} F(x+iy) = 0. \tag{A.2.9}$$

Proof. The statement concerning α in (A.2.7) is clear. It follows from (A.2.5) that

$$\frac{1}{iy}F(iy) = \frac{1}{iy}\alpha + \beta + \int\_{\mathbb{R}} \frac{1}{iy} \frac{1+iyt}{(t-iy)} \frac{d\sigma(t)}{t^2+1}.\tag{A.2.10}$$

An application of the dominated convergence theorem shows the first identity for β in (A.2.7). The second identity in (A.2.7) follows from (A.2.6). The integral representation (A.2.6) also shows that

$$
y \operatorname{Im} F(x+iy) = \beta y^2 + \int\_{\mathbb{R}} \frac{y^2}{(t-x)^2 + y^2} \, d\sigma(t),
$$

and the identity (A.2.8) follows from the dominated convergence theorem. One also sees from (A.2.5) that

$$y \operatorname{Re} F(x+iy) = y \left(\alpha + \beta x\right) + \int\_{\mathbb{R}} \frac{y[(t-x)(1+xt) - y^2t]}{((t-x)^2 + y^2)(t^2+1)} d\sigma(t). \tag{A.2.11}$$

By writing xt = (t − x)x + x² it follows that the numerator of the integrand is equal to

$$\begin{aligned} y\left[ (t-x)\left(1+(t-x)x+x^2\right) - y^2t \right] \\ &= y\left[ (t-x)\left(1+(t-x)x+x^2-y^2\right) - y^2x \right] \\ &= y(t-x)[1+x^2-y^2] + (t-x)^2xy - y^3x. \end{aligned}$$

Now assume that |y| ≤ 1. Then one sees that for fixed x ∈ R the integrand in (A.2.11) is dominated by

$$\frac{x^2 + 4|x| + 2}{2(t^2 + 1)}.$$

Thus, the identity in (A.2.9) follows from the dominated convergence theorem. □
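The recovery formulas (A.2.7) and (A.2.8) can be illustrated with a rational Nevanlinna function, for which β and the point masses of dσ are known in advance (all data below are arbitrary illustrative choices):

```python
import numpy as np

# F(lam) = alpha + beta*lam + sum_j w_j/(t_j - lam), a rational Nevanlinna function
alpha, beta = -0.4, 0.25
nodes = np.array([-1.0, 2.0])
weights = np.array([0.6, 1.2])   # point masses of dsigma

def F(lam):
    return alpha + beta * lam + np.sum(weights / (nodes - lam))

# beta = lim_{y -> oo} F(iy)/(iy), cf. (A.2.7)
beta_num = (F(1e8j) / 1e8j).real

# sigma(x+) - sigma(x-) = lim_{y -> 0} y Im F(x + iy), cf. (A.2.8)
y = 1e-8
mass_at_2 = y * F(2.0 + 1j * y).imag     # close to the mass 1.2 at t = 2
mass_at_half = y * F(0.5 + 1j * y).imag  # close to 0: no mass at t = 0.5
print(beta_num, mass_at_2, mass_at_half)
```

Taking y large (respectively small) mimics the limits in (A.2.7) and (A.2.8) up to rounding.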

In addition to Lemma A.2.6, the following Stieltjes inversion formula helps to recover the essential parts of the function σ.

**Lemma A.2.7.** Let F : C \ R → C be a Nevanlinna function with the integral representation (A.2.5). Let U be an open neighborhood in C of [a, b] ⊂ R and let g : U → C be holomorphic. Then

$$\begin{split} &\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{a}^{b} \left[ (gF)(s + i\varepsilon) - (gF)(s - i\varepsilon) \right] ds \\ &= \frac{1}{2} \int\_{\{a\}} g(t) \, d\sigma(t) + \int\_{a+}^{b-} g(t) \, d\sigma(t) + \frac{1}{2} \int\_{\{b\}} g(t) \, d\sigma(t). \end{split} \tag{A.2.12}$$

For any rectangle R = [A, B] × [−iε₀, iε₀] ⊂ U with A < a < b < B there exists M ≥ 0 such that, for 0 < ε ≤ ε₀,

$$\left| \int\_{a}^{b} \left[ (gF)(s + i\varepsilon) - (gF)(s - i\varepsilon) \right] ds \right| \le M \sup \left\{ |g(\lambda)|, |g'(\lambda)| : \lambda \in R \right\}, \tag{A.2.13}$$

where g′ stands for the derivative of g.

Proof. Consider the Nevanlinna function F given by

$$F(\lambda) = \alpha + \beta \lambda + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) \, d\sigma(t), \quad \lambda \in \mathbb{C} \,\,\backslash \mathbb{R},$$

where α ∈ R, β ≥ 0, and σ is a nondecreasing function satisfying the integrability condition

$$\int\_{\mathbb{R}} \frac{1}{t^2 + 1} \, d\sigma(t) < \infty;$$

cf. Theorem A.2.5. Choose an interval (A, B) ⊂ R such that

$$[a, b] \subset (A, B) \subset [A, B] \subset \mathcal{U}$$

and choose ε₀ > 0 such that R = [A, B] × [−iε₀, iε₀] ⊂ U. Observe that the choice of A and B leads to the decomposition

$$g(\lambda)F(\lambda) = G(\lambda) + H(\lambda) + g(\lambda)K(\lambda), \quad \lambda \in (\mathbb{C} \backslash \mathbb{R}) \cap \mathcal{U},\tag{A.2.14}$$

where the functions G and H are given by

$$G(\lambda) = \int\_A^B \frac{1}{t - \lambda} \, g(t) \, d\sigma(t), \quad H(\lambda) = \int\_A^B \frac{g(\lambda) - g(t)}{t - \lambda} \, d\sigma(t),$$

while the factor K is given by

$$\begin{split} K(\lambda) &= \alpha + \beta \lambda - \int\_{A}^{B} \frac{t}{t^2 + 1} \, d\sigma(t) \\ &\quad + \left( \int\_{-\infty}^{A} + \int\_{B}^{\infty} \right) \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) d\sigma(t). \end{split}$$

The contributions of the functions G, H, and K will be considered separately.

Denote by g̃ the extension of g on [A, B] by zero to all of R. Then G can be written as

$$G(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \tilde{g}(t) \, d\sigma(t), \quad \text{where} \quad \int\_{\mathbb{R}} \frac{|\tilde{g}(t)|}{|t| + 1} \, d\sigma(t) < \infty.$$

Hence, one concludes from Proposition A.1.1 that

$$\begin{split} \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{a}^{b} \left( G(s + i\varepsilon) - G(s - i\varepsilon) \right) ds \\ &= \frac{1}{2} \int\_{\{a\}} \widetilde{g}(t) \, d\sigma(t) + \int\_{a+}^{b-} \widetilde{g}(t) \, d\sigma(t) + \frac{1}{2} \int\_{\{b\}} \widetilde{g}(t) \, d\sigma(t) \\ &= \frac{1}{2} \int\_{\{a\}} g(t) \, d\sigma(t) + \int\_{a+}^{b-} g(t) \, d\sigma(t) + \frac{1}{2} \int\_{\{b\}} g(t) \, d\sigma(t). \end{split} \tag{A.2.15}$$

Moreover, since g̃ ∈ L¹_dσ(R), it follows from the estimate (A.1.3) in Proposition A.1.1 that

$$\begin{split} \left| \int\_{a}^{b} \left( G(s + i\varepsilon) - G(s - i\varepsilon) \right) ds \right| &\leq M' \sup \left\{ |g(\lambda)| : \lambda \in [A, B] \right\} \\ &\leq M' \sup \left\{ |g(\lambda)| : \lambda \in R \right\}. \end{split} \tag{A.2.16}$$

The function H is defined for λ ∈ (C \ R) ∩ U and can be extended to U by setting

$$H(\lambda) = \int\_{A}^{B} h(t, \lambda) \, d\sigma(t), \quad \text{where} \quad h(t, \lambda) = \begin{cases} \frac{g(\lambda) - g(t)}{t - \lambda}, & t \neq \lambda, \\ -g'(t), & t = \lambda. \end{cases} \tag{A.2.17}$$

Clearly, the function H in (A.2.17) is bounded on the rectangle R by

$$L \sup \left\{ \left| g'(\lambda) \right| : \lambda \in R \right\},$$

where L is a constant. Note that for all s ∈ (a, b)

$$H(s+i\varepsilon) - H(s-i\varepsilon) \to 0, \quad \varepsilon \downarrow 0,$$

and hence dominated convergence yields

$$\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{a}^{b} \left( H(s + i\varepsilon) - H(s - i\varepsilon) \right) ds = 0. \tag{A.2.18}$$

Note that it also follows from the above that there exists M′′ ≥ 0 such that for all 0 < ε ≤ ε₀

$$\left| \int\_{a}^{b} \left( H(s + i\varepsilon) - H(s - i\varepsilon) \right) ds \right| \le M'' \sup \{ |g'(\lambda)| : \lambda \in R \}. \tag{A.2.19}$$

The function K has a holomorphic extension to the set (C \ R) ∪ (A, B) and this extension is uniformly continuous on the rectangle [a, b] × [−iε₀, iε₀]. It is clear that

$$(gK)(s+i\varepsilon) - (gK)(s-i\varepsilon) \to 0, \quad \varepsilon \to 0,$$

holds for all s ∈ (a, b). Since |g| is also bounded on [a, b] × [−iε₀, iε₀], dominated convergence shows that

$$\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{a}^{b} \left( (gK)(s + i\varepsilon) - (gK)(s - i\varepsilon) \right) ds = 0. \tag{A.2.20}$$

Furthermore, one sees that there exists M′′′ ≥ 0 such that, for all 0 < ε ≤ ε₀,

$$\begin{split} \left| \int\_{a}^{b} \left( (gK)(s + i\varepsilon) - (gK)(s - i\varepsilon) \right) ds \right| \\ \leq M^{\prime\prime\prime} \sup \left\{ |g(\lambda)| : \lambda \in [a, b] \times [-i\varepsilon\_{0}, i\varepsilon\_{0}] \right\} \\ \leq M^{\prime\prime\prime} \sup \left\{ |g(\lambda)| : \lambda \in R \right\}. \end{split} \tag{A.2.21}$$

Now the assertion (A.2.12) follows from (A.2.14), (A.2.15), (A.2.18), and (A.2.20); in a similar way the assertion (A.2.13) follows from (A.2.14), (A.2.16), (A.2.19), and (A.2.21). □

For the special case g(t) = 1 Lemma A.2.7 has the following form.

**Corollary A.2.8.** Let F : C \ R → C be a Nevanlinna function with the integral representation (A.2.5). Then

$$\lim\_{\varepsilon \downarrow 0} \frac{1}{\pi} \int\_{a}^{b} \operatorname{Im} F(s + i\varepsilon) \, ds = \frac{\sigma(b+) + \sigma(b-)}{2} - \frac{\sigma(a+) + \sigma(a-)}{2}.$$

It may happen that a Nevanlinna function has an analytic continuation to a subinterval of R.

**Proposition A.2.9.** Let F be a Nevanlinna function as in Theorem A.2.5 and let (c, d) ⊂ R be an open interval. Then the following statements are equivalent:

(i) F admits an analytic continuation to (C \ R) ∪ (c, d) which is real on (c, d);

(ii) the function σ in (A.2.5) is constant on the interval (c, d).
In this case

$$F(x) = \alpha + x\beta + \int\_{\mathbb{R}\backslash(c,d)} \left(\frac{1}{t-x} - \frac{t}{t^2+1}\right) d\sigma(t), \quad x \in (c,d),\tag{A.2.22}$$

and F is a real nondecreasing function on (c, d).

Proof. (i) ⇒ (ii) By assumption, F(x) is real for every x ∈ (c, d). Now apply Corollary A.2.8 to any compact subinterval of (c, d). This implies that σ is constant on every compact subinterval of (c, d).

(ii) ⇒ (i) This is a direct consequence of Theorem A.2.5, since

$$F(\lambda) = \alpha + \lambda \beta + \int\_{\mathbb{R}\backslash(c, d)} \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) d\sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

If either (i) or (ii) holds, then (A.2.22) follows by a limit process. In particular, F is real on (c, d). From (A.2.22) one also concludes that for all x ∈ (c, d) the function F is differentiable and

$$F'(x) = \beta + \int\_{\mathbb{R}\backslash(c,d)} \frac{1}{(t-x)^2} \, d\sigma(t), \quad x \in (c,d),$$

is nonnegative. Hence, F is nondecreasing on (c, d). This completes the proof. □

Assume that Im F(μ) = 0 for some μ ∈ C \ R. Then it follows from (A.2.6) that in the integral representation (A.2.5) one has β = 0 and dσ = 0. In other words, Im F(λ) = 0 for all λ ∈ C \ R and F(λ) = α for all λ ∈ C \ R. If F admits an analytic continuation to (c, d) ⊂ R, one concludes in the same way that F(x) = α for x ∈ (c, d). If F′(x₀) = 0 for some x₀ ∈ (c, d), then F′(x) = 0 for all x ∈ (c, d) and F(λ) = α for all λ ∈ (C \ R) ∪ (c, d). Finally, observe that if Im F(μ) ≠ 0 for some μ ∈ C \ R, then F(λ) ≠ 0 for all λ ∈ C \ R,

$$\operatorname{Im}\left(-\frac{1}{F(\lambda)}\right) = \frac{\operatorname{Im}F(\lambda)}{|F(\lambda)|^2}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

and hence −1/F is also a Nevanlinna function.
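A quick numerical sanity check of this last observation (with an arbitrary sample Nevanlinna function, not taken from the text) samples points in the upper half-plane:

```python
import numpy as np

rng = np.random.default_rng(0)

def F(lam):
    # sample Nevanlinna function: beta = 0.5 and a point mass of dsigma at t = 1
    return 0.5 * lam + 1.0 / (1.0 - lam)

# Im(-1/F) = Im F / |F|^2 >= 0 on the upper half-plane, so -1/F maps
# the upper half-plane into itself and is again a Nevanlinna function
lams = rng.uniform(-5, 5, 100) + 1j * rng.uniform(1e-3, 5, 100)
vals = -1.0 / np.array([F(l) for l in lams])
print(bool(np.all(vals.imag >= 0)))   # True
```

Since β > 0 here, Im F(λ) > 0 throughout C⁺, so F has no zeros there and −1/F is well defined.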

**Remark A.2.10.** This remark is a continuation of Remark A.1.6. If the function g in Lemma A.2.7 is a holomorphic n × n matrix function on U, then the results (A.2.12) and (A.2.13) remain valid with the integrals interpreted in the matrix sense; cf. Remark A.1.6. Furthermore, the function g may be defined on K × U with some compact space K such that x ↦ g_x(λ) is continuous for all λ ∈ U and λ ↦ g_x(λ) is holomorphic for all x ∈ K. In this case (A.2.12) remains valid for all x ∈ K, while the upper bound in (A.2.13) must be replaced by

$$M \sup \left\{ |g\_x(\lambda)|, |g\_x'(\lambda)| : x \in K, \,\lambda \in R \right\}.$$

## **A.3 Operator-valued integrals**

This section is concerned with operator-valued integrals which will be used in the integral representation of operator-valued Nevanlinna functions. For this purpose the notion of an improper Riemann–Stieltjes integral of bounded continuous functions is carried over to the case of operator-valued distribution functions.

In order to treat the Riemann–Stieltjes integral in the operator-valued case one needs the following preparatory observations. A function Θ : R → **B**(G) whose values are self-adjoint operators is said to be nondecreasing if t₁ ≤ t₂ implies

$$(\Theta(t\_1)\varphi, \varphi) \le (\Theta(t\_2)\varphi, \varphi), \quad \varphi \in \mathcal{G}.$$

In general, such a function has limits as t → ±∞ that are self-adjoint relations; cf. Chapter 5. In the following, Θ will be called a self-adjoint nondecreasing operator function. Furthermore, Θ will be called uniformly bounded if there exists M ≥ 0 such that for all t ∈ R

$$|(\Theta(t)\varphi,\varphi)| \le M\|\varphi\|^2, \quad \varphi \in \mathcal{G}.$$

**Lemma A.3.1.** Let Θ : R → **B**(G) be a self-adjoint nondecreasing operator function. Then the one-sided limits

$$
\Theta(t\pm), \quad t \in \mathbb{R},
$$

exist in the strong sense and are bounded self-adjoint operators. If, in addition, Θ : R → **B**(G) is uniformly bounded, then Θ(±∞) exist in the strong sense and are bounded self-adjoint operators.

Proof. Let t ∈ R and choose an increasing sequence t_n → t−. It is no restriction to assume that Θ(t_n) ≥ 0 for all n ∈ N. Then for every ϕ ∈ G the sequence (Θ(t_n)ϕ, ϕ) is nondecreasing and bounded by (Θ(t)ϕ, ϕ) ≤ ‖Θ(t)‖ ‖ϕ‖². Hence, (Θ(t_n)ϕ, ϕ) converges to some ν_{ϕ,ϕ} ∈ R. Via polarization one finds that (Θ(t_n)ϕ, ψ) converges for all ϕ, ψ ∈ G and therefore

$$\mathcal{G} \times \mathcal{G} \ni (\varphi, \psi) \mapsto \lim\_{n \to \infty} (\Theta(t\_n)\varphi, \psi)$$

is a symmetric sesquilinear form which is continuous, because

$$\begin{aligned} \left| \lim\_{n \to \infty} (\Theta(t\_n)\varphi, \psi) \right| &\leq \lim\_{n \to \infty} (\Theta(t\_n)\varphi, \varphi)^{1/2} (\Theta(t\_n)\psi, \psi)^{1/2} \\ &\leq \|\Theta(t)\| \|\varphi\| \|\psi\|, \end{aligned}$$

where the Cauchy–Schwarz inequality was used for the nonnegative sesquilinear form (Θ(tn)·, ·). Thus, there exists a self-adjoint operator Ω ∈ **B**(G) such that

$$\lim\_{n \to \infty} (\Theta(t\_n)\varphi, \psi) = (\Omega \varphi, \psi)$$

for all ϕ, ψ ∈ G. Then ‖Ω − Θ(tₙ)‖ ≤ ‖Ω‖ + ‖Θ(t)‖, while Ω − Θ(tₙ) ≥ 0. Recall the Cauchy–Schwarz inequality ‖Af‖² ≤ ‖A‖ (Af, f), f ∈ G, for nonnegative operators A ∈ **B**(G). Thus, one obtains

$$\| (\Omega - \Theta(t\_n))\varphi \| ^2 \le (\|\Omega\| + \|\Theta(t)\|) \left( (\Omega - \Theta(t\_n))\varphi, \varphi \right) \to 0$$

for n → ∞ and ϕ ∈ G, i.e., Θ(t−) exists in the strong sense. Similar arguments show that Θ(t+), t ∈ ℝ, exists in the strong sense. If, in addition, Θ is uniformly bounded, one verifies in the same way that the limits Θ(±∞) exist in the strong sense and are bounded self-adjoint operators. □

**Corollary A.3.2.** Let Θ : ℝ → **B**(G) be a self-adjoint nondecreasing operator function which is uniformly bounded. Then for every compact interval [a, b] one has

$$0 \le \left( (\Theta(b) - \Theta(a))\varphi, \varphi \right) \le \left( (\Theta(+\infty) - \Theta(-\infty))\varphi, \varphi \right), \quad \varphi \in \mathcal{G},$$

and consequently

$$\| \Theta(b) - \Theta(a) \| \le \| \Theta(+\infty) - \Theta(-\infty) \|.$$

After these preliminaries the operator-valued integrals will be introduced. Let [a, b] be a compact interval and let Θ : [a, b] → **B**(G) be a self-adjoint nondecreasing operator function. Let f : [a, b] → ℂ be a continuous function. For a finite partition a = t₀ < t₁ < ··· < tₙ = b of the interval [a, b], define the Riemann–Stieltjes sum

$$S\_n := \sum\_{i=1}^n f(t\_i) \left( \Theta(t\_i) - \Theta(t\_{i-1}) \right). \tag{A.3.1}$$

The bounded operators Sₙ converge in **B**(G) as max |tᵢ − tᵢ₋₁| tends to zero. The limit will be called the operator Riemann–Stieltjes integral of f with respect to Θ and will be denoted by

$$\int\_{a}^{b} f(t) \, d\Theta(t) \in \mathbf{B}(\mathcal{G}). \tag{A.3.2}$$
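As a numerical illustration (not part of the theory above), the sums (A.3.1) can be evaluated for an ad hoc matrix-valued example on G = ℂ²; the function Θ below, built from two orthogonal projections, is an assumption made only for this sketch.

```python
import numpy as np

def rs_integral(f, Theta, a, b, n):
    """Riemann-Stieltjes sum (A.3.1): sum_i f(t_i) (Theta(t_i) - Theta(t_{i-1}))."""
    t = np.linspace(a, b, n + 1)
    S = np.zeros_like(Theta(b))
    for i in range(1, n + 1):
        S = S + f(t[i]) * (Theta(t[i]) - Theta(t[i - 1]))
    return S

# Ad hoc nondecreasing self-adjoint operator function on G = C^2 (illustration):
# Theta(t) = arctan(t) P1 + t P2 with orthogonal projections P1, P2.
P1 = np.array([[1.0, 0.0], [0.0, 0.0]])
P2 = np.array([[0.0, 0.0], [0.0, 1.0]])
Theta = lambda t: np.arctan(t) * P1 + t * P2
f = lambda t: t**2

# Refining the partition: the sums S_n converge in B(G) (here: in matrix norm).
S_coarse = rs_integral(f, Theta, 0.0, 1.0, 100)
S_fine = rs_integral(f, Theta, 0.0, 1.0, 10000)
print(np.linalg.norm(S_fine - S_coarse))
```

For this choice the (1,1) entry approximates ∫₀¹ t² d arctan t = 1 − π/4 and the (2,2) entry approximates ∫₀¹ t² dt = 1/3.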

**Lemma A.3.3.** Let [a, b] be a compact interval and let Θ : [a, b] → **B**(G) be a self-adjoint nondecreasing operator function. Let f : [a, b] → ℂ be a continuous function. Then for all ϕ, ψ ∈ G

$$\left( \left( \int\_a^b f(t) \, d\Theta(t) \right) \varphi, \psi \right) = \int\_a^b f(t) \, d(\Theta(t)\varphi, \psi). \tag{A.3.3}$$

Moreover, for all ϕ ∈ G,

$$\left\| \left( \int\_{a}^{b} f(t) \, d\Theta(t) \right) \varphi \right\|^2 \le \left\| \Theta(b) - \Theta(a) \right\| \int\_{a}^{b} \left| f(t) \right|^2 d(\Theta(t)\varphi, \varphi). \tag{A.3.4}$$

In particular, for all ϕ ∈ G,

$$\left\| \left( \int\_{a}^{b} f(t) \, d\Theta(t) \right) \varphi \right\| \le \left( \sup\_{t \in [a,b]} |f(t)| \right) \|\Theta(b) - \Theta(a)\| \|\varphi\|. \tag{A.3.5}$$

Proof. It is clear from the definition involving the Riemann–Stieltjes sums in (A.3.1) that the identity (A.3.3) holds.

To see that (A.3.4) holds, observe first that

$$\left\| T\_1\varphi\_1 + \dots + T\_n\varphi\_n \right\|^2 \le \left\| T\_1T\_1^\* + \dots + T\_nT\_n^\* \right\| \left( \|\varphi\_1\|^2 + \dots + \|\varphi\_n\|^2 \right), \tag{A.3.6}$$

where T₁, ..., Tₙ ∈ **B**(G) and ϕ₁, ..., ϕₙ ∈ G. One verifies (A.3.6) by interpreting the row (T₁ ... Tₙ) as a bounded operator A from G × ··· × G to G and recalling that ‖A‖² = ‖AA∗‖. Now rewrite the following Riemann–Stieltjes sum as indicated:

$$\begin{aligned} &\sum\_{i=1}^n f(t\_i) \left( \Theta(t\_i) - \Theta(t\_{i-1}) \right) \varphi \\ &\qquad = \sum\_{i=1}^n \left( \Theta(t\_i) - \Theta(t\_{i-1}) \right)^{\frac{1}{2}} f(t\_i) \left( \Theta(t\_i) - \Theta(t\_{i-1}) \right)^{\frac{1}{2}} \varphi. \end{aligned}$$

The right-hand side may be written as T₁ϕ₁ + ··· + Tₙϕₙ, where

$$T\_i = \left(\Theta(t\_i) - \Theta(t\_{i-1})\right)^{\frac{1}{2}} \in \mathbf{B}(\mathcal{G}), \ \varphi\_i = f(t\_i) \left(\Theta(t\_i) - \Theta(t\_{i-1})\right)^{\frac{1}{2}} \varphi \in \mathcal{G}$$

for i = 1,...,n. Furthermore, one has

$$T\_1 T\_1^\* + \dots + T\_n T\_n^\* = \sum\_{i=1}^n \left( \Theta(t\_i) - \Theta(t\_{i-1}) \right) = \Theta(b) - \Theta(a).$$

Hence, the general estimate (A.3.6) above gives

$$\begin{aligned} \left\| \sum\_{i=1}^n f(t\_i) \left( \Theta(t\_i) - \Theta(t\_{i-1}) \right) \varphi \right\|^2 &= \left\| T\_1 \varphi\_1 + \dots + T\_n \varphi\_n \right\|^2 \\ &\le \left\| \Theta(b) - \Theta(a) \right\| \left( \left\| \varphi\_1 \right\|^2 + \dots + \left\| \varphi\_n \right\|^2 \right) . \end{aligned}$$

Since

$$\begin{aligned} \left\| \varphi\_1 \right\|^2 + \dots + \left\| \varphi\_n \right\|^2 &= \sum\_{i=1}^n |f(t\_i)|^2 \left\| (\Theta(t\_i) - \Theta(t\_{i-1}))^{\frac{1}{2}} \varphi \right\|^2 \\ &= \sum\_{i=1}^n |f(t\_i)|^2 \left( (\Theta(t\_i) - \Theta(t\_{i-1})) \varphi, \varphi \right) \end{aligned}$$

one concludes (A.3.4) with a limit argument. Finally, (A.3.5) is an immediate consequence of (A.3.4) and

$$\int\_a^b d(\Theta(t)\varphi,\varphi) = \left( (\Theta(b) - \Theta(a))\varphi,\varphi \right) \le \|\Theta(b) - \Theta(a)\| \|\varphi\|^2.$$

This completes the proof. □
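The key estimate (A.3.6) used in the proof, with ‖A‖² = ‖AA∗‖ for the row operator A = (T₁ ... Tₙ), can be probed numerically; the matrices and vectors below are random and serve only as an illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3  # number of terms and dim G = C^d (illustrative choices)

T = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(n)]
phi = [rng.standard_normal(d) + 1j * rng.standard_normal(d) for _ in range(n)]

# Left-hand side of (A.3.6): ||T_1 phi_1 + ... + T_n phi_n||^2.
lhs = np.linalg.norm(sum(Ti @ pi for Ti, pi in zip(T, phi))) ** 2
# Right-hand side: ||T_1 T_1^* + ... + T_n T_n^*|| times the summed norms.
A = sum(Ti @ Ti.conj().T for Ti in T)
rhs = np.linalg.norm(A, 2) * sum(np.linalg.norm(pi) ** 2 for pi in phi)
print(lhs <= rhs + 1e-12)
```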

The integral in (A.3.2) enjoys the usual linearity properties. The nonnegativity property

$$f(t) \ge 0, \ t \in [a, b] \quad \Rightarrow \quad \int\_a^b f(t) \, d\Theta(t) \ge 0$$

is a direct consequence of (A.3.3) of the previous lemma. Moreover, if c is a point of the open interval (a, b), then

$$\int\_{a}^{b} f(t) \, d\Theta(t) = \int\_{a}^{c} f(t) \, d\Theta(t) + \int\_{c}^{b} f(t) \, d\Theta(t). \tag{A.3.7}$$

It follows from (A.3.7) that the integral defined in (A.3.2) and the properties in Lemma A.3.3 remain valid for functions f : [a, b] → ℂ that are piecewise continuous.

Now the integral in (A.3.2) will be extended to an improper Riemann–Stieltjes integral on ℝ under the assumption that the self-adjoint nondecreasing operator function Θ : ℝ → **B**(G) is uniformly bounded; cf. Lemma A.3.1. Note that the restriction to bounded continuous functions guarantees the existence of the improper integral. However, it is clear that the results remain valid for bounded functions f that are continuous up to finitely many points.

**Proposition A.3.4.** Let Θ : ℝ → **B**(G) be a self-adjoint nondecreasing operator function which is uniformly bounded and let f : ℝ → ℂ be a bounded continuous function. Then there exists a unique linear operator

$$\int\_{\mathbb{R}} f(t) \, d\Theta(t) \in \mathbf{B}(\mathcal{G}) \tag{A.3.8}$$

such that

$$\left( \int\_{\mathbb{R}} f(t) \, d\Theta(t) \right) \varphi = \lim\_{a \to -\infty} \lim\_{b \to \infty} \left( \int\_{a}^{b} f(t) \, d\Theta(t) \right) \varphi \tag{A.3.9}$$

for all ϕ ∈ G, and

$$\left( \left( \int\_{\mathbb{R}} f(t) \, d\Theta(t) \right) \varphi, \psi \right) = \int\_{\mathbb{R}} f(t) \, d(\Theta(t)\varphi, \psi) \tag{A.3.10}$$

for all ϕ, ψ ∈ G. Moreover,

$$\left\| \left( \int\_{\mathbb{R}} f(t) \, d\Theta(t) \right) \varphi \right\|^2 \le \left\| \Theta(+\infty) - \Theta(-\infty) \right\| \int\_{\mathbb{R}} |f(t)|^2 \, d(\Theta(t)\varphi, \varphi) \tag{A.3.11} $$

for all ϕ ∈ G. In particular,

$$\left\| \left( \int\_{\mathbb{R}} f(t) \, d\Theta(t) \right) \varphi \right\| \le \left( \sup\_{t \in \mathbb{R}} |f(t)| \right) \|\Theta(+\infty) - \Theta(-\infty)\| \|\varphi\| \tag{A.3.12}$$

for all ϕ ∈ G.

Proof. By assumption, there exists some M > 0 such that for all ϕ ∈ G

$$|(\Theta(t)\varphi,\varphi)| \le M\|\varphi\|^2, \qquad t \in \mathbb{R};\tag{A.3.13}$$

cf. Lemma A.3.1. First the existence of the limit in (A.3.9) will be verified. With the estimate (A.3.13) the inequality (A.3.4) may be written as

$$\left\| \left( \int\_{a}^{b} f(t) \, d\Theta(t) \right) \varphi \right\|^{2} \le 2M \int\_{a}^{b} |f(t)|^{2} \, d(\Theta(t)\varphi, \varphi). \tag{A.3.14}$$

Now consider two compact intervals [a, b] ⊂ [a′, b′] and observe from (A.3.7) that

$$\int\_{a'}^{b'} f(t) \, d\Theta(t) - \int\_{a}^{b} f(t) \, d\Theta(t) = \int\_{a'}^{a} f(t) \, d\Theta(t) + \int\_{b}^{b'} f(t) \, d\Theta(t).$$

Hence, for every ϕ ∈ G one has

$$\begin{split} & \left\| \int\_{a'}^{b'} f(t) \, d\Theta(t) \varphi - \int\_{a}^{b} f(t) \, d\Theta(t) \varphi \right\|^2 \\ & \qquad \le 2 \left\| \int\_{a'}^{a} f(t) \, d\Theta(t) \varphi \right\|^2 + 2 \left\| \int\_{b}^{b'} f(t) \, d\Theta(t) \varphi \right\|^2 \\ & \qquad \le 4M \int\_{a'}^{a} |f(t)|^2 \, d(\Theta(t)\varphi, \varphi) + 4M \int\_{b}^{b'} |f(t)|^2 \, d(\Theta(t)\varphi, \varphi), \end{split} \tag{A.3.15}$$

where the estimate (A.3.14) has been used. The right-hand side of (A.3.15) gives a Cauchy sequence for a, a′ → −∞ and b, b′ → +∞, since for all ϕ ∈ G

$$\begin{aligned} \int\_{\mathbb{R}} |f(t)|^2 \, d(\Theta(t)\varphi, \varphi) &\leq \|f\|\_{\infty}^2 \int\_{\mathbb{R}} d(\Theta(t)\varphi, \varphi) \\ &= \|f\|\_{\infty}^2 \left( (\Theta(+\infty)\varphi, \varphi) - (\Theta(-\infty)\varphi, \varphi) \right) \\ &\leq 2M \|f\|\_{\infty}^2 \|\varphi\|^2 < \infty. \end{aligned}$$

Therefore, the strong limit on the right-hand side of (A.3.9) exists. This limit is denoted by the left-hand side of (A.3.9).

To verify (A.3.10), observe that the left-hand side of (A.3.10) is given by

$$\lim\_{a \to -\infty} \lim\_{b \to \infty} \left( \left( \int\_a^b f(t) \, d\Theta(t) \right) \varphi, \psi \right) = \lim\_{a \to -\infty} \lim\_{b \to \infty} \int\_a^b f(t) \, d(\Theta(t)\varphi, \psi),$$

where (A.3.3) was used. The statement now follows from the dominated convergence theorem.

Finally, as to (A.3.11) and (A.3.8), recall from (A.3.4) that for every ϕ ∈ G and for every compact interval [a, b] one has the estimate

$$\begin{split} \left\| \left( \int\_{a}^{b} f(t) \, d\Theta(t) \right) \varphi \right\|^{2} &\leq \| \Theta(b) - \Theta(a) \| \int\_{a}^{b} |f(t)|^{2} \, d(\Theta(t)\varphi, \varphi) \\ &\leq \| \Theta(+\infty) - \Theta(-\infty) \| \int\_{\mathbb{R}} |f(t)|^{2} \, d(\Theta(t)\varphi, \varphi), \end{split} \tag{A.3.16}$$

where in the last inequality Corollary A.3.2 and the dominated convergence theorem have been used. Clearly, (A.3.11) follows from (A.3.16). This also leads to (A.3.12), which implies (A.3.8). □

The linearity and nonnegativity properties are preserved for the improper Riemann–Stieltjes integral. The adjoint of the integral in (A.3.8) is given by

$$\left(\int\_{\mathbb{R}} f(t) \, d\Theta(t)\right)^{\*} = \int\_{\mathbb{R}} \overline{f(t)} \, d\Theta(t). \tag{A.3.17}$$

Another immediate consequence is the following limit result.

**Corollary A.3.5.** Let Θ : ℝ → **B**(G) be a self-adjoint nondecreasing operator function which is uniformly bounded. Let fₙ : ℝ → ℂ be a sequence of continuous functions which is uniformly bounded. Let f : ℝ → ℂ be a bounded continuous function such that lim_{n→∞} fₙ(t) = f(t) for all t ∈ ℝ. Then for all ϕ ∈ G

$$\left(\int\_{\mathbb{R}} f\_n(t) \, d\Theta(t)\right)\varphi \to \left(\int\_{\mathbb{R}} f(t) \, d\Theta(t)\right)\varphi.$$

Proof. Use the linearity property and Proposition A.3.4 to conclude that for all ϕ ∈ G

$$\begin{aligned} & \left\| \left( \int\_{\mathbb{R}} f(t) \, d\Theta(t) \right) \varphi - \left( \int\_{\mathbb{R}} f\_n(t) \, d\Theta(t) \right) \varphi \right\|^2 \\ & \qquad \le \left\| \Theta(+\infty) - \Theta(-\infty) \right\| \int\_{\mathbb{R}} |f(t) - f\_n(t)|^2 \, d(\Theta(t)\varphi, \varphi). \end{aligned}$$

Now apply the dominated convergence theorem. □

The next goal is to extend the operator-valued Riemann–Stieltjes integral to the case where the self-adjoint nondecreasing **B**(G)-valued function appearing in the integral is not uniformly bounded. In principle this more general situation will be reduced to the case discussed above. In the following consider a (not necessarily uniformly bounded) self-adjoint nondecreasing operator function Σ : ℝ → **B**(G). Let ω : ℝ → ℝ be a continuous positive function with a positive lower bound, and define the operator function Θ : ℝ → **B**(G) by

$$\Theta(t) = \int\_0^t \frac{d\Sigma(s)}{\omega(s)}, \quad t \in \mathbb{R}. \tag{A.3.18}$$

The function Θ can be used to extend the definition of the integral in Proposition A.3.4. First a preliminary lemma is needed.

**Lemma A.3.6.** Let Σ : ℝ → **B**(G) be a self-adjoint nondecreasing operator function and let ω : ℝ → ℝ be a continuous positive function with a positive lower bound. Then Θ in (A.3.18) defines a self-adjoint nondecreasing operator function from ℝ to **B**(G). Moreover, for every bounded continuous function f : ℝ → ℂ and for every compact interval [a, b] one has

$$\int\_{a}^{b} f(t) \, \frac{d\Sigma(t)}{\omega(t)} = \int\_{a}^{b} f(t) \, d\Theta(t) \in \mathbf{B}(\mathcal{G}).\tag{A.3.19}$$

Proof. It follows from Lemma A.3.3 that Θ(t), t ∈ ℝ, in (A.3.18) is well defined and that Θ(t) ∈ **B**(G). Moreover,

$$\left(\Theta(t)\varphi,\varphi\right) = \int\_0^t \frac{d\left(\Sigma(s)\varphi,\varphi\right)}{\omega(s)}, \quad t \in \mathbb{R},\tag{A.3.20}$$

for all ϕ ∈ G; cf. Lemma A.3.3. Thus, Θ(t) is self-adjoint and it follows that

$$
\left(\Theta(t\_2)\varphi,\varphi\right) - \left(\Theta(t\_1)\varphi,\varphi\right) = \int\_{t\_1}^{t\_2} \frac{d\left(\Sigma(s)\varphi,\varphi\right)}{\omega(s)}, \quad t\_1 \le t\_2,
$$

so that Θ is a nondecreasing operator function. Since the functions f /ω and f are continuous on [a, b], Lemma A.3.3 shows that both integrals

$$\int\_{a}^{b} f(t) \frac{d\Sigma(t)}{\omega(t)} \quad \text{and} \quad \int\_{a}^{b} f(t) \, d\Theta(t)$$

belong to **B**(G). Observe that, for all ϕ ∈ G,

$$\begin{aligned} \left( \left( \int\_a^b f(t) \frac{d\Sigma(t)}{\omega(t)} \right) \varphi, \varphi \right) &= \int\_a^b f(t) \frac{d(\Sigma(t)\varphi, \varphi)}{\omega(t)} \\ &= \int\_a^b f(t) \, d(\Theta(t)\varphi, \varphi) \\ &= \left( \left( \int\_a^b f(t) d\Theta(t) \right) \varphi, \varphi \right). \end{aligned}$$

Here the first and the third equality follow from Lemma A.3.3, while the second equality uses the Radon–Nikodým derivative in (A.3.20). By polarization,

$$\left( \left( \int\_a^b f(t) \frac{d\Sigma(t)}{\omega(t)} \right) \varphi, \psi \right) = \left( \left( \int\_a^b f(t) d\Theta(t) \right) \varphi, \psi \right)$$

for all ϕ, ψ ∈ G, and hence

$$\left(\int\_{a}^{b} f(t) \frac{d\Sigma(t)}{\omega(t)}\right) \varphi = \left(\int\_{a}^{b} f(t) d\Theta(t)\right) \varphi \tag{A.3.21}$$

holds for all ϕ ∈ G. This implies (A.3.19). □

As a consequence of Lemma A.3.6 one may recover the function Σ from the function Θ, since for every compact interval [a, b] ⊂ ℝ one has

$$\int\_{a}^{b} d\Sigma(t) = \int\_{a}^{b} \omega(t) \, d\Theta(t).$$
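The change of measure in (A.3.18)–(A.3.19) can be checked on a scalar example; the choices Σ(t) = t and ω(t) = t² + 1 below (so that Θ(t) = arctan t) are assumptions made for this sketch only.

```python
import numpy as np

# Scalar check of (A.3.19): Sigma(t) = t (Lebesgue measure), omega(t) = t**2 + 1,
# hence Theta(t) = int_0^t dSigma(s)/omega(s) = arctan(t), as in (A.3.18).
omega = lambda t: t**2 + 1.0
Theta = np.arctan

a, b, n = -1.0, 2.0, 200000
t = np.linspace(a, b, n + 1)
f = lambda t: t

# Left side of (A.3.19): int_a^b f(t) dSigma(t)/omega(t).
lhs = np.sum(f(t[1:]) / omega(t[1:]) * np.diff(t))
# Right side: Riemann-Stieltjes sum for int_a^b f(t) dTheta(t).
rhs = np.sum(f(t[1:]) * np.diff(Theta(t)))
print(abs(lhs - rhs))
```

Both sums approximate ∫_{−1}^{2} t/(1 + t²) dt = ½ ln(5/2), so the two sides of (A.3.19) agree up to discretization error.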

Now the definition of the integral in Proposition A.3.4 is extended to "unbounded" measures by means of a "Radon–Nikodým" derivative.

**Proposition A.3.7.** Let Σ : ℝ → **B**(G) be a self-adjoint nondecreasing operator function, let ω : ℝ → ℝ be a continuous positive function with a positive lower bound, and let Θ : ℝ → **B**(G) be defined by (A.3.18). Then the following statements are equivalent:

(i) For all ϕ ∈ G one has

$$\int\_{\mathbb{R}} \frac{d(\Sigma(s)\varphi, \varphi)}{\omega(s)} < \infty. \tag{A.3.22}$$

(ii) In the sense of strong limits one has

$$\int\_{\mathbb{R}} \frac{d\Sigma(t)}{\omega(t)} \in \mathbf{B}(\mathcal{G}).$$

(iii) The nondecreasing operator function Θ : ℝ → **B**(G) in (A.3.18) is uniformly bounded.


Assume that either of the above conditions is satisfied. Let f : ℝ → ℂ be a bounded continuous function. Then

$$\int\_{\mathbb{R}} f(t) \, \frac{d\Sigma(t)}{\omega(t)} = \int\_{\mathbb{R}} f(t) \, d\Theta(t),\tag{A.3.23}$$

where each side belongs to **B**(G) and is understood as the strong limit of the corresponding operators in the identity (A.3.19).

Proof. (i) ⇒ (iii) Assume the condition (A.3.22). Then it follows from (A.3.18) and the monotone convergence theorem that

$$\left\| (\Theta(b) - \Theta(a))^{\frac{1}{2}} \varphi \right\|^{2} = \int\_{a}^{b} \frac{d(\Sigma(s)\varphi, \varphi)}{\omega(s)} \le \int\_{\mathbb{R}} \frac{d(\Sigma(s)\varphi, \varphi)}{\omega(s)}$$

for every compact interval [a, b]. The assumption (A.3.22) and the uniform boundedness principle show the existence of a constant M such that

$$\| (\Theta(b) - \Theta(a))^{\frac{1}{2}} \| \le M$$

for every compact interval [a, b]. In particular, this leads to the inequality

$$\left| \left( (\Theta(b) - \Theta(a))\varphi, \varphi \right) \right| \le M^2 \|\varphi\|^2,$$

which implies that the nondecreasing operator function Θ is uniformly bounded. This gives (iii).

(iii) ⇒ (ii) Assume that Θ : ℝ → **B**(G) in (A.3.18) is uniformly bounded. It follows from (A.3.19) that for every compact interval [a, b] ⊂ ℝ

$$\int\_{a}^{b} \frac{d\Sigma(t)}{\omega(t)} = \int\_{a}^{b} d\Theta(t) = \Theta(b) - \Theta(a),$$

where each side belongs to **B**(G). The result now follows by taking strong limits for [a, b] → ℝ.

(ii) ⇒ (i) The assumption means that for all ϕ ∈ G

$$\left(\int\_{\mathbb{R}} \frac{d\Sigma(t)}{\omega(t)}\right)\varphi = \lim\_{a \to -\infty} \lim\_{b \to \infty} \left(\int\_{a}^{b} \frac{d\Sigma(t)}{\omega(t)}\right)\varphi,$$

with convergence in G. In particular, this gives that

$$\begin{aligned} \left( \left( \int\_{\mathbb{R}} \frac{d\Sigma(t)}{\omega(t)} \right) \varphi, \varphi \right) &= \lim\_{a \to -\infty} \lim\_{b \to \infty} \left( \left( \int\_{a}^{b} \frac{d\Sigma(t)}{\omega(t)} \right) \varphi, \varphi \right) \\ &= \lim\_{a \to -\infty} \lim\_{b \to \infty} \int\_{a}^{b} \frac{d(\Sigma(t)\varphi, \varphi)}{\omega(t)} \\ &= \int\_{\mathbb{R}} \frac{d(\Sigma(t)\varphi, \varphi)}{\omega(t)}, \end{aligned}$$

where the second step is justified by Lemma A.3.3 and the last step by the monotone convergence theorem. This leads to (A.3.22).

By Proposition A.3.4, the integral on the right-hand side of (A.3.21) converges strongly to

$$\int\_{\mathbb{R}} f(t)d\Theta(t) \in \mathbf{B}(\mathcal{G}).$$

Together with the identity (A.3.19) this leads to (A.3.23). □

For the reader's convenience the following facts are mentioned in terms of Σ : ℝ → **B**(G) and ω : ℝ → ℝ from Proposition A.3.7. They can be easily verified via the identity (A.3.23). Let f : ℝ → ℂ be a bounded continuous function. Then

$$\left(\int\_{\mathbb{R}} f(t) \frac{d\Sigma(t)}{\omega(t)}\right)^{\*} = \int\_{\mathbb{R}} \overline{f(t)} \, \frac{d\Sigma(t)}{\omega(t)} \in \mathbf{B}(\mathcal{G});\tag{A.3.24}$$

cf. (A.3.17), and for all ϕ ∈ G

$$\left( \left( \int\_{\mathbb{R}} f(t) \frac{d\Sigma(t)}{\omega(t)} \right) \varphi, \varphi\right) = \int\_{\mathbb{R}} f(t) \frac{d(\Sigma(t)\varphi, \varphi)}{\omega(t)};\tag{A.3.25}$$

cf. (A.3.10). Furthermore, if fₙ : ℝ → ℂ is a sequence of continuous functions which is uniformly bounded and f : ℝ → ℂ is a bounded continuous function such that lim_{n→∞} fₙ(t) = f(t) for all t ∈ ℝ, then for all ϕ ∈ G

$$\left(\int\_{\mathbb{R}} f\_n(t) \, \frac{d\Sigma(t)}{\omega(t)}\right) \varphi \to \left(\int\_{\mathbb{R}} f(t) \, \frac{d\Sigma(t)}{\omega(t)}\right) \varphi;\tag{A.3.26}$$

cf. Corollary A.3.5. It is also clear that Proposition A.3.7 and the above properties of the integral remain true for bounded functions f : ℝ → ℂ with finitely many discontinuities.

The following notation will be used later: Consider an interval I ⊂ ℝ, let χ_I be the corresponding characteristic function, and let f : ℝ → ℂ be a bounded continuous function. Then χ_I f is a piecewise continuous function and one defines

$$\int\_{I} f(t) \frac{d\Sigma(t)}{\omega(t)} = \int\_{\mathbb{R}} \chi\_{I}(t) f(t) \frac{d\Sigma(t)}{\omega(t)}.\tag{A.3.27}$$

In particular, for c ∈ ℝ integrals of the form

$$\int\_{[c,\infty)} f(t) \, \frac{d\Sigma(t)}{\omega(t)} = \int\_{\mathbb{R}} \chi\_{[c,\infty)}(t) f(t) \, \frac{d\Sigma(t)}{\omega(t)}\tag{A.3.28}$$

will appear in the context of Nevanlinna functions that admit an analytic continuation to (−∞, c); see Section A.6.

## **A.4 Operator-valued Nevanlinna functions**

The notion of a scalar Nevanlinna function in Definition A.2.3 carries over easily to the operator-valued case. The present section is concerned with developing the corresponding operator-valued integral representations. The main tool is provided by Proposition A.3.7.

**Definition A.4.1.** Let G be a Hilbert space and let F : ℂ \ ℝ → **B**(G) be an operator function. Then F is called a **B**(G)-valued Nevanlinna function if

(i) F is holomorphic on ℂ \ ℝ;

$$\text{(ii)}\ F(\lambda) = F(\overline{\lambda})^\*, \ \lambda \in \mathbb{C} \backslash \mathbb{R};$$

(iii) Im F(λ)/Im λ ≥ 0, λ ∈ ℂ \ ℝ.

Operator-valued Nevanlinna functions admit integral representations as in the scalar case; cf. Theorem A.2.5.

**Theorem A.4.2.** Let G be a Hilbert space and let F : ℂ \ ℝ → **B**(G) be an operator function. Then the following statements are equivalent:

(i) F has an integral representation of the form

$$F(\lambda) = \alpha + \lambda \beta + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) d\Sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{A.4.1}$$

with self-adjoint operators α, β ∈ **B**(G), β ≥ 0, and a nondecreasing self-adjoint operator function Σ : ℝ → **B**(G) such that

$$\int\_{\mathbb{R}} \frac{d\Sigma(t)}{t^2 + 1} \in \mathbf{B}(\mathcal{G}),\tag{A.4.2}$$

where the integrals in (A.4.1) and (A.4.2) converge in the strong topology.

(ii) F is a Nevanlinna function.

Note that for λ ∈ ℂ \ ℝ the identity (A.4.1) can be rewritten as

$$F(\lambda) = \alpha + \lambda \beta + \int\_{\mathbb{R}} f\_{\lambda}(t) \, d\Theta(t), \quad \lambda \in \mathbb{C} \,\backslash \,\mathbb{R},\tag{A.4.3}$$

where the bounded continuous function f_λ : ℝ → ℂ and the "bounded measure" Θ are given by

$$f\_{\lambda}(t) = \frac{1 + \lambda t}{t - \lambda} \quad \text{and} \quad \Theta(t) = \int\_{0}^{t} \frac{d\Sigma(s)}{s^{2} + 1}, \quad t \in \mathbb{R}, \tag{A.4.4}$$

respectively; cf. (A.3.18) and (A.3.23) with ω(t) = t² + 1.

Proof. (i) ⇒ (ii) Due to the condition (A.4.2) it follows that F(λ) in (A.4.1) is well defined and represents an element in **B**(G); cf. (A.4.3), (A.4.4), and Proposition A.3.7. Moreover, it follows from (A.4.3) and (A.3.25) that for ϕ ∈ G

$$(F(\lambda)\varphi,\varphi) = (\alpha\varphi,\varphi) + \lambda(\beta\varphi,\varphi) + \int\_{\mathbb{R}} f\_{\lambda}(t) \, d(\Theta(t)\varphi,\varphi), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Therefore, one sees that the function F is holomorphic on ℂ \ ℝ. It is clear that F(λ)∗ = F(λ̄); cf. (A.3.24). Since (αϕ, ϕ) ∈ ℝ and (βϕ, ϕ) ∈ ℝ, one also sees that

$$\operatorname{Im}\left(F(\lambda)\varphi,\varphi\right) = \left(\operatorname{Im}\lambda\right)\left[\left(\beta\varphi,\varphi\right) + \int\_{\mathbb{R}} \frac{t^2 + 1}{|t - \lambda|^2} d(\Theta(t)\varphi,\varphi)\right], \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Therefore, Im (F(λ)ϕ, ϕ)/Im λ ≥ 0 for all λ ∈ ℂ \ ℝ. This implies that F is a Nevanlinna function.

(ii) ⇒ (i) Let F : ℂ \ ℝ → **B**(G) be an operator-valued Nevanlinna function. Then M(λ) = F(λ) − Re F(i) is also a Nevanlinna function and Re M(i) = 0. For ϕ ∈ G the function (M(λ)ϕ, ϕ) is a scalar Nevanlinna function with the integral representation

$$(M(\lambda)\varphi,\varphi) = \lambda\beta\_{\varphi,\varphi} + \int\_{\mathbb{R}} \frac{1+\lambda t}{t-\lambda} \, d\theta\_{\varphi,\varphi}(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

where βϕ,ϕ ≥ 0 and θϕ,ϕ : ℝ → ℝ is a nondecreasing function; cf. Theorem A.2.4 and Lemma A.2.6. Then it follows that

$$\operatorname{Im}\left(M(i)\varphi,\varphi\right) = \beta\_{\varphi,\varphi} + \int\_{\mathbb{R}} d\theta\_{\varphi,\varphi}(t),$$

where each term on the right-hand side is nonnegative, so that

$$0 \le \beta\_{\varphi,\varphi} + \int\_{\mathbb{R}} d\theta\_{\varphi,\varphi}(t) = \mathrm{Im}\left(M(i)\varphi,\varphi\right) \le \|M(i)\| \|\varphi\|^2, \quad \varphi \in \mathcal{G}.$$

Without loss of generality it will be assumed that

$$
\theta\_{\varphi,\varphi}(-\infty) = 0 \quad \text{and} \quad \theta\_{\varphi,\varphi}(t) = \frac{\theta\_{\varphi,\varphi}(t+) + \theta\_{\varphi,\varphi}(t-)}{2}, \ t \in \mathbb{R}, \tag{A.4.5}
$$

(see also the proof of Theorem A.2.4) and thus

$$0 \le \beta\_{\varphi,\varphi} \le \|M(i)\| \|\varphi\|^2 \quad \text{and} \quad 0 \le \theta\_{\varphi,\varphi}(t) \le \|M(i)\| \|\varphi\|^2 \tag{A.4.6}$$

for all t ∈ ℝ. For ϕ, ψ ∈ G one defines by polarization

$$
\beta\_{\varphi,\psi} = \frac{1}{4} \left( \beta\_{\varphi+\psi,\varphi+\psi} - \beta\_{\varphi-\psi,\varphi-\psi} + i\beta\_{\varphi+i\psi,\varphi+i\psi} - i\beta\_{\varphi-i\psi,\varphi-i\psi} \right),
$$


and similarly

$$\theta\_{\varphi,\psi} = \frac{1}{4} (\theta\_{\varphi+\psi,\varphi+\psi} - \theta\_{\varphi-\psi,\varphi-\psi} + i\theta\_{\varphi+i\psi,\varphi+i\psi} - i\theta\_{\varphi-i\psi,\varphi-i\psi}).\tag{A.4.7}$$

Then it follows that

$$(M(\lambda)\varphi,\psi) = \lambda\beta\_{\varphi,\psi} + \int\_{\mathbb{R}} \frac{1+\lambda t}{t-\lambda} \, d\theta\_{\varphi,\psi}(t), \quad \lambda \in \mathbb{C} \,\backslash \,\mathbb{R},\tag{A.4.8}$$

where βϕ,ψ is determined by

$$\beta\_{\varphi,\psi} = \lim\_{y \to \infty} \frac{(M(iy)\varphi,\psi)}{iy}$$

(see (A.2.10) in the proof of Lemma A.2.6) and θϕ,ψ : ℝ → ℂ is a function of bounded variation.

Now the representation (A.4.8) will be used to verify that the bounded forms

$$\{\varphi,\psi\} \mapsto \beta\_{\varphi,\psi}, \quad \{\varphi,\psi\} \mapsto \theta\_{\varphi,\psi}(t), \tag{A.4.9}$$

are sesquilinear. For instance, with ϕ, ϕ′, ψ ∈ G it follows that

$$\begin{split} \lambda \beta\_{\varphi + \varphi', \psi} &+ \int\_{\mathbb{R}} \frac{1 + \lambda t}{t - \lambda} \, d\theta\_{\varphi + \varphi', \psi}(t) \\ &= \left( M(\lambda)(\varphi + \varphi'), \psi \right) \\ &= \left( M(\lambda)\varphi, \psi \right) + \left( M(\lambda)\varphi', \psi \right) \\ &= \lambda \beta\_{\varphi, \psi} + \int\_{\mathbb{R}} \frac{1 + \lambda t}{t - \lambda} \, d\theta\_{\varphi, \psi}(t) + \lambda \beta\_{\varphi', \psi} + \int\_{\mathbb{R}} \frac{1 + \lambda t}{t - \lambda} \, d\theta\_{\varphi', \psi}(t) \\ &= \lambda (\beta\_{\varphi, \psi} + \beta\_{\varphi', \psi}) + \int\_{\mathbb{R}} \frac{1 + \lambda t}{t - \lambda} \, d(\theta\_{\varphi, \psi}(t) + \theta\_{\varphi', \psi}(t)). \end{split}$$

After dividing this equality by λ = iy and letting y → ∞, one concludes that

$$
\beta\_{\varphi + \varphi', \psi} = \beta\_{\varphi, \psi} + \beta\_{\varphi', \psi}. \tag{A.4.10}
$$

Setting θ̃ = θϕ+ϕ′,ψ − θϕ,ψ − θϕ′,ψ, one obtains

$$0 = \int\_{\mathbb{R}} \frac{1 + \lambda t}{t - \lambda} \, d\tilde{\theta}(t) = \int\_{\mathbb{R}} \left(\lambda + \frac{1 + \lambda^2}{t - \lambda}\right) \, d\tilde{\theta}(t), \qquad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

and hence

$$\int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\tilde{\theta}(t) = -\int\_{\mathbb{R}} \frac{\lambda}{1 + \lambda^2} \, d\tilde{\theta}(t), \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Now it follows from Corollary A.1.3 that for every compact subinterval [a, b] ⊂ ℝ one has

$$0 = \frac{\tilde{\theta}(b+) + \tilde{\theta}(b-)}{2} - \frac{\tilde{\theta}(a+) + \tilde{\theta}(a-)}{2} = \tilde{\theta}(b) - \tilde{\theta}(a),$$

where (A.4.5) and (A.4.7) were used in the second equality. With the help of (A.4.5) and (A.4.7) via polarization one also concludes that θ̃(a) → 0 for a → −∞ and therefore θ̃(b) = 0 for all b ∈ ℝ. This implies

$$
\theta\_{\varphi+\varphi',\psi}(t) = \theta\_{\varphi,\psi}(t) + \theta\_{\varphi',\psi}(t), \qquad t \in \mathbb{R}. \tag{A.4.11}
$$

It follows from (A.4.10), (A.4.11), and similar considerations that the forms in (A.4.9) are both sesquilinear. Furthermore, these forms are nonnegative and from the Cauchy–Schwarz inequality and (A.4.6) it follows that they are bounded. Hence, there exists a uniquely determined nonnegative operator β ∈ **B**(G) such that

$$
\beta\_{\varphi,\psi} = (\beta\varphi, \psi), \qquad \varphi, \psi \in \mathcal{G},
$$

and for each fixed t ∈ ℝ there exists a uniquely determined bounded operator Θ(t) ∈ **B**(G) such that

$$\theta\_{\varphi,\psi}(t) = (\Theta(t)\varphi, \psi), \qquad \varphi, \psi \in \mathcal{G}.$$

From

$$(\Theta(t)\varphi,\varphi) = \theta\_{\varphi,\varphi}(t) = \overline{\theta\_{\varphi,\varphi}(t)} = (\varphi,\Theta(t)\varphi), \quad \varphi \in \mathcal{G},$$

one concludes that (Θ(t)ϕ, ψ) = (ϕ, Θ(t)ψ), so that Θ(t) is a self-adjoint operator in **B**(G). Furthermore, for t ≤ t′ one sees that

$$
\theta\_{\varphi,\varphi}(t) \le \theta\_{\varphi,\varphi}(t') \quad \Rightarrow \quad \Theta(t) \le \Theta(t')
$$

and therefore Θ : ℝ → **B**(G), t ↦ Θ(t), is a nondecreasing self-adjoint operator function. Thus, one obtains for all ϕ, ψ ∈ G

$$\begin{aligned} \left( \left( \lambda \beta + \int\_{\mathbb{R}} \frac{1 + \lambda t}{t - \lambda} \, d\Theta(t) \right) \varphi, \psi \right) &= \lambda \beta\_{\varphi, \psi} + \int\_{\mathbb{R}} \frac{1 + \lambda t}{t - \lambda} \, d\theta\_{\varphi, \psi}(t) \\ &= \left( (F(\lambda) - \operatorname{Re} F(i)) \varphi, \psi \right), \end{aligned}$$

which gives (A.4.3) with α = Re F(i). □

The imaginary part Im F(λ) ∈ **B**(G) of the Nevanlinna function F in Theorem A.4.2 admits the representation

$$\frac{\operatorname{Im} F(\lambda)}{\operatorname{Im} \lambda} = \beta + \int\_{\mathbb{R}} \frac{1}{|t - \lambda|^2} \, d\Sigma(t), \quad \lambda \in \mathbb{C} \,\backslash \,\mathbb{R}, \tag{A.4.12}$$

where the integral exists in the strong sense. In particular, one has for all ϕ ∈ G:

$$\frac{(\operatorname{Im} F(\lambda)\varphi, \varphi)}{\operatorname{Im} \lambda} = (\beta \varphi, \varphi) + \int\_{\mathbb{R}} \frac{1}{|t - \lambda|^2} \, d(\Sigma(t)\varphi, \varphi), \quad \lambda \in \mathbb{C} \backslash \mathbb{R};$$

cf. (A.3.25). Hence, Im F(λ)/Im λ is a nonnegative operator in **B**(G).

The next lemma is the counterpart of Lemma A.2.6 for operator-valued Nevanlinna functions.

**Lemma A.4.3.** Let F be a **B**(G)-valued Nevanlinna function with integral representation (A.4.1). The operators α, β, and the nondecreasing self-adjoint operator function Σ : R → **B**(G) are related to the function F by the following identities:

$$
\alpha = \operatorname{Re} F(i), \tag{A.4.13}
$$

$$\beta \varphi = \lim\_{y \to \infty} \frac{F(iy)}{iy} \varphi = \lim\_{y \to \infty} \frac{\operatorname{Im} F(iy)}{y} \varphi, \quad \varphi \in \mathcal{G}. \tag{A.4.14}$$

Moreover, for all x ∈ R

$$\lim\_{y \downarrow 0} y \left( \operatorname{Im} F(x + iy)\varphi, \varphi \right) = (\Sigma(x+)\varphi, \varphi) - (\Sigma(x-)\varphi, \varphi), \quad \varphi \in \mathcal{G}. \tag{A.4.15}$$

Proof. It is clear from (A.4.1) that (A.4.13) holds. The identities for β in (A.4.14) follow with the help of (A.3.26) in the same way as in the proof of Lemma A.2.6. Finally, note that the statement (A.4.15) is a direct consequence of Lemma A.2.6 and Lemma A.3.1. □
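As a simple scalar illustration of (A.4.13)–(A.4.15), take G = C and F(λ) = −1/λ, a Nevanlinna function whose representing function in (A.4.1) is the unit jump Σ(t) = χ\_{[0,∞)}(t) with α = β = 0. Indeed,

$$\alpha = \operatorname{Re} F(i) = \operatorname{Re}\left(-\frac{1}{i}\right) = \operatorname{Re}(i) = 0, \qquad \beta = \lim\_{y \to \infty} \frac{F(iy)}{iy} = \lim\_{y \to \infty} \frac{1}{y^2} = 0,$$

and, in accordance with (A.4.15),

$$\lim\_{y \downarrow 0} y \operatorname{Im} F(iy) = \lim\_{y \downarrow 0} y \cdot \frac{1}{y} = 1 = \Sigma(0+) - \Sigma(0-),$$

while for x ≠ 0 one has lim\_{y↓0} y Im F(x + iy) = lim\_{y↓0} y²/(x² + y²) = 0, in agreement with the continuity of Σ at x.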

In the present context the Stieltjes inversion formula has the following form; cf. Lemma A.2.7.

**Lemma A.4.4.** Let F : C \ R → **B**(G) be a Nevanlinna function with the integral representation (A.4.1) and let [a, b] ⊂ R. Assume that U is an open neighborhood of [a, b] in C and that g : U → C is holomorphic. Then

$$\begin{split} &\lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{a}^{b} \left( \{ (gF)(s + i\varepsilon) - (gF)(s - i\varepsilon) \} \varphi, \varphi \right) ds \\ & \qquad = \frac{1}{2} \int\_{\{a\}} g(t) \, d(\Sigma(t)\varphi, \varphi) + \int\_{a+}^{b-} g(t) \, d(\Sigma(t)\varphi, \varphi) + \frac{1}{2} \int\_{\{b\}} g(t) \, d(\Sigma(t)\varphi, \varphi) \end{split} \tag{A.4.16}$$

holds for all ϕ ∈ G. If the function g is entire, then (A.4.16) is valid for any compact interval [a, b]. In particular,

$$\begin{aligned} &\lim\_{\varepsilon \downarrow 0} \frac{1}{\pi} \int\_a^b (\operatorname{Im} F(s + i\varepsilon)\varphi, \varphi) \, ds \\ & \qquad = \frac{(\Sigma(b+)\varphi, \varphi) + (\Sigma(b-)\varphi, \varphi)}{2} - \frac{(\Sigma(a+)\varphi, \varphi) + (\Sigma(a-)\varphi, \varphi)}{2} \end{aligned}$$

for all ϕ ∈ G.

For operator-valued Nevanlinna functions Proposition A.2.9 has the following form.

**Proposition A.4.5.** Let F be a Nevanlinna function as in Theorem A.4.2 and let (c, d) <sup>⊂</sup> <sup>R</sup> be an open interval. Then the following statements are equivalent:
(i) F admits an analytic continuation to (c, d), that is, F is holomorphic on (C \ R) ∪ (c, d);

(ii) the function t → Σ(t)ϕ is constant on (c, d) for all ϕ ∈ G.


In this case

$$F(x) = \alpha + x\beta + \int\_{\mathbb{R}\backslash(c,d)} \left(\frac{1}{t-x} - \frac{t}{t^2+1}\right) d\Sigma(t), \quad x \in (c,d), \tag{A.4.17}$$

is self-adjoint; the integral in (A.4.17) converges in the strong topology. Moreover, F is nondecreasing on (c, d):

$$(F(x\_1)\varphi,\varphi) \le (F(x\_2)\varphi,\varphi), \quad c < x\_1 < x\_2 < d.$$

Proof. The implication (i) ⇒ (ii) follows from the Stieltjes inversion formula in Lemma A.4.4, which implies that t → (Σ(t)ϕ, ϕ) is constant for t ∈ (c, d) and ϕ ∈ G. For c < t₁ < t₂ < d one concludes from the inequality

$$\left| \left( (\Sigma(t\_2) - \Sigma(t\_1))\varphi, \psi \right) \right|^2 \le \left( (\Sigma(t\_2) - \Sigma(t\_1))\varphi, \varphi \right) \left( (\Sigma(t\_2) - \Sigma(t\_1))\psi, \psi \right) = 0$$

that the function t → Σ(t)ϕ is constant on (c, d) for all ϕ ∈ G. For the implication (ii) ⇒ (i) consider the integral representation (A.4.1) in Theorem A.4.2, where the integral converges in the strong sense. Since t → Σ(t)ϕ is constant on (c, d), this representation takes the form

$$\begin{split} F(\lambda) &= \alpha + \lambda \beta + \int\_{\mathbb{R}} \chi\_{\mathbb{R}\backslash(c, d)}(t) \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) d\Sigma(t) \\ &= \alpha + \lambda \beta + \int\_{\mathbb{R}\backslash(c, d)} \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) d\Sigma(t) \end{split} \tag{A.4.18}$$

for all λ ∈ C \ R; cf. (A.3.27). It follows that λ → (F(λ)ϕ, ϕ) is holomorphic on (C \ R) ∪ (c, d) for all ϕ ∈ G and hence F is holomorphic on (C \ R) ∪ (c, d).

To obtain (A.4.17), observe that for x ∈ (c, d) one has

$$
\chi\_{\mathbb{R}\backslash(c,d)}(t)\left(\frac{1}{t-\lambda} - \frac{t}{t^2+1}\right) \to \chi\_{\mathbb{R}\backslash(c,d)}(t)\left(\frac{1}{t-x} - \frac{t}{t^2+1}\right)
$$

as λ → x, λ ∈ C \ R, and the functions are uniformly bounded in t. Hence, (A.3.26) and (A.4.18) imply (A.4.17), where the integral in (A.4.17) converges in the strong topology by Proposition A.3.7. Furthermore, the bounded operators F(x), x ∈ (c, d), are self-adjoint by (A.3.24) and Proposition A.2.9 shows that x → (F(x)ϕ, ϕ) is nondecreasing on (c, d). □

If F has an analytic continuation to (c, d) ⊂ R, then it follows from (A.4.17) that F is differentiable on (c, d) and that

$$F'(x) = \beta + \int\_{\mathbb{R}} \frac{1}{|t - x|^2} \, d\Sigma(t), \quad x \in (c, d), \tag{A.4.19}$$

where the integral exists in the strong sense. In particular, one has for all ϕ ∈ G:

$$\left(F'(x)\varphi,\varphi\right) = \left(\beta\varphi,\varphi\right) + \int\_{\mathbb{R}} \frac{1}{|t-x|^2} \, d\left(\Sigma(t)\varphi,\varphi\right), \quad x \in (c,d);$$

cf. (A.3.25). It is clear that F′(x) is a nonnegative operator in **B**(G).

The next observation on isolated singularities of F is useful. If (c, d) ⊂ R and F admits an analytic continuation to (c, d) \ {y} for some y ∈ (c, d), then Proposition A.4.5 implies

$$F(\lambda) = \alpha + \lambda\beta + \int\_{\mathbb{R}\backslash((c,y)\cup(y,d))} \left(\frac{1}{t-\lambda} - \frac{t}{t^2+1}\right) d\Sigma(t)$$

for all λ ∈ (C \ R) ∪ (c, y) ∪ (y, d). Dominated convergence shows that

$$-\lim\_{\lambda \to y} (\lambda - y)(F(\lambda)\varphi, \psi) = \left(\Sigma(y+)\varphi, \psi\right) - \left(\Sigma(y-)\varphi, \psi\right), \quad \varphi, \psi \in \mathcal{G},$$

whence

$$\lim\_{\lambda \to y} \left\| (\lambda - y) F(\lambda) \right\| = \left\| \Sigma(y +) - \Sigma(y -) \right\|.$$
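In the scalar case G = C this limit simply recovers the mass at the singularity: for F(λ) = 1/(y − λ), whose representing function Σ has a single unit jump at t = y, one obtains

$$-\lim\_{\lambda \to y} (\lambda - y) F(\lambda) = -\lim\_{\lambda \to y} \frac{\lambda - y}{y - \lambda} = 1 = \Sigma(y+) - \Sigma(y-).$$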

The following result establishes an important property of operator-valued Nevanlinna functions which will be characteristic for the Weyl functions in this text; see Chapter 4. The proof given here depends on the integral representation of a Nevanlinna function.

**Proposition A.4.6.** Let F be a **B**(G)-valued Nevanlinna function. Then the following statements hold:
(i) If Im F(μ) is boundedly invertible for some μ ∈ C \ R, then Im F(λ) is boundedly invertible for all λ ∈ C \ R.

(ii) If F admits an analytic continuation to an interval (c, d) ⊂ R and Im F(μ) is boundedly invertible for some μ ∈ C \ R or F′(x₀) is boundedly invertible for some x₀ ∈ (c, d), then F′(x) is boundedly invertible for all x ∈ (c, d).


Proof. The proof will be given in three steps and involves the nonnegativity of the operators in (A.4.12) and (A.4.19).

Step 1. Assume for some μ ∈ C \ R that 0 ∈ σp(Im F(μ)). Then there exists a nontrivial element ϕ ∈ ker (Im F(μ)). Since β ≥ 0, it follows from (A.4.12) that

$$(\beta \varphi, \varphi) = 0 \quad \text{and} \quad \int\_{\mathbb{R}} \frac{1}{|t - \mu|^2} \, d(\Sigma(t)\varphi, \varphi) = 0,$$

and therefore ϕ ∈ ker β and (Σ(t)ϕ, ϕ) = 0 for all t ∈ R. Hence, due to (A.4.12) one sees that (Im F(λ)ϕ, ϕ) = 0 for all λ ∈ C \ R. For λ ∈ C⁺ one concludes by the nonnegativity of Im F(λ) that Im F(λ)ϕ = 0. For λ ∈ C⁻ one has that Im F(λ)ϕ = −Im F(λ̄)ϕ = 0. Therefore, it follows that 0 ∈ σp(Im F(λ)) for all λ ∈ C \ R. Moreover, in case there is an analytic continuation to (c, d), (A.4.19) implies that (F′(x)ϕ, ϕ) = 0 for all x ∈ (c, d). For x ∈ (c, d) one now concludes by the nonnegativity of F′(x) that F′(x)ϕ = 0. Hence, it follows that 0 ∈ σp(F′(x)) for all x ∈ (c, d).

If F admits an analytic continuation to (c, d), then the above arguments show that the assumption 0 ∈ σp(F′(x₀)) for some x₀ ∈ (c, d) leads to 0 ∈ σp(Im F(λ)) for all λ ∈ C \ R and 0 ∈ σp(F′(x)) for all x ∈ (c, d).

Step 2. Assume for some μ ∈ C \ R that 0 ∈ σc(Im F(μ)). Then there exists a sequence ϕₙ ∈ G, ‖ϕₙ‖ = 1, such that Im F(μ)ϕₙ → 0 as n → ∞. It follows from (A.4.12) that

$$(\beta \varphi\_n, \varphi\_n) \to 0 \quad \text{and} \quad \int\_{\mathbb{R}} \frac{1}{|t - \mu|^2} \, d(\Sigma(t)\varphi\_n, \varphi\_n) \to 0, \quad n \to \infty,$$

and hence also (Im F(λ)ϕₙ, ϕₙ) → 0 for any λ ∈ C \ R. Thus, for λ ∈ C⁺ one concludes from the nonnegativity of Im F(λ) that Im F(λ)ϕₙ → 0. Hence, one concludes that 0 ∈ σc(Im F(λ)) for all λ ∈ C \ R. Moreover, if there is an analytic continuation to (c, d), then it follows from (A.4.19) that

$$\left(\beta \varphi\_n, \varphi\_n\right) \to 0 \quad \text{and} \quad \int\_{\mathbb{R}} \frac{1}{|t - x|^2} \, d(\Sigma(t)\varphi\_n, \varphi\_n) \to 0, \quad n \to \infty,$$

and hence also (F′(x)ϕₙ, ϕₙ) → 0 for any x ∈ (c, d). Thus, for x ∈ (c, d) one concludes by the nonnegativity of F′(x) that F′(x)ϕₙ → 0. Hence, 0 ∈ σc(F′(x)) for all x ∈ (c, d).

Therefore, it has been shown that the assumption 0 ∈ σc(Im F(μ)) for some μ ∈ C \ R leads to

$$0 \in \sigma\_c(\operatorname{Im} F(\lambda)), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \quad \text{and} \quad 0 \in \sigma\_c(F'(x)), \quad x \in (c, d).$$

It is clear that if F admits an analytic continuation to (c, d), the above arguments show that the assumption 0 ∈ σc(F′(x₀)) for some x₀ ∈ (c, d) leads to 0 ∈ σc(Im F(λ)) for all λ ∈ C \ R and 0 ∈ σc(F′(x)) for all x ∈ (c, d).

Step 3. Now assume that Im F(μ) is boundedly invertible for some μ ∈ C \ R or, if there is an analytic continuation to (c, d), that F′(x₀) is boundedly invertible for some x₀ ∈ (c, d); in other words, 0 ∈ ρ(Im F(μ)) or 0 ∈ ρ(F′(x₀)). Since Im F(λ), λ ∈ C \ R, and F′(x), x ∈ (c, d), are self-adjoint operators in **B**(G), it follows that σr(Im F(λ)) = ∅ and σr(F′(x)) = ∅. The assertions (i) and (ii) of the proposition now follow from Step 1 and Step 2. □

Next the notion of a uniformly strict Nevanlinna function will be defined. It can be used in conjunction with Proposition A.4.6.

**Definition A.4.7.** Let G be a Hilbert space and let F : C \ R → **B**(G) be a Nevanlinna function. The function F is said to be uniformly strict if its imaginary part Im F(λ) is boundedly invertible for some, and hence for all, λ ∈ C \ R.

In this context it is helpful to recall the following facts. Let A ∈ **B**(G) with Im A ≥ 0. Then clearly Im A is boundedly invertible if and only if Im A ≥ ε for some ε > 0. Note that if Im A ≥ ε for some ε > 0, then

$$\varepsilon \|\varphi\|^2 \le (\operatorname{Im} A\,\varphi,\varphi) \le |(A\varphi,\varphi)| \le \|A\varphi\| \, \|\varphi\|, \quad \varphi \in \mathcal{G},$$

yields that ε‖ϕ‖ ≤ ‖Aϕ‖, ϕ ∈ G. Therefore, if Im A is boundedly invertible, then so is A itself, i.e., A⁻¹ ∈ **B**(G), and furthermore

$$\operatorname{Im}\left(-A^{-1}\right) = A^{-1}(\operatorname{Im}A)A^{-\*},$$

so that Im (−A⁻¹) ≥ 0, and in fact Im (−A⁻¹) ≥ ε′ for some ε′ > 0. The next lemma is now clear.

**Lemma A.4.8.** Let G be a Hilbert space and let F : C \ R → **B**(G) be a Nevanlinna function. If the function F is uniformly strict, then its inverse −F⁻¹ is a uniformly strict **B**(G)-valued Nevanlinna function and

$$\operatorname{Im}\left(-F(\lambda)^{-1}\right) = F(\lambda)^{-1}(\operatorname{Im}F(\lambda))F(\lambda)^{-\*}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

In general, for an operator-valued Nevanlinna function F with values in **B**(G) the values of the inverse −F⁻¹ need not be bounded operators.
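A simple scalar example: for G = C the function F(λ) = λ is uniformly strict, since Im F(λ) = Im λ is invertible for every λ ∈ C \ R, and in accordance with Lemma A.4.8 the inverse −F(λ)⁻¹ = −1/λ is again a uniformly strict Nevanlinna function:

$$\operatorname{Im}\left(-F(\lambda)^{-1}\right) = \frac{\operatorname{Im} \lambda}{|\lambda|^2} = F(\lambda)^{-1}(\operatorname{Im}F(\lambda))F(\lambda)^{-\*}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$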

## **A.5 Kac functions**

The notion of an operator-valued Nevanlinna function in Definition A.4.1 gave rise to the integral representation of Nevanlinna functions in Theorem A.4.2. Next, special subclasses of Nevanlinna functions with corresponding integral representations will be considered. Whenever one deals with "unbounded measures", the interpretation of the integrals is again via Proposition A.3.7.

**Definition A.5.1.** Let G be a Hilbert space and let F : C \ R → **B**(G) be a Nevanlinna function. Then F is said to belong to the class of Kac functions if for all ϕ ∈ G

$$\int\_{1}^{\infty} \frac{\operatorname{Im} \left( F(iy)\varphi, \varphi \right)}{y} dy < \infty. \tag{A.5.1}$$

Note that every **B**(G)-valued Nevanlinna function F that satisfies

$$\sup\_{y>0} y \left( \mathrm{Im} \left( F(iy)\varphi, \varphi \right) \right) < \infty, \quad \varphi \in \mathcal{G}, \tag{A.5.2}$$

is a Kac function.
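For instance, in the scalar case G = C the function F(λ) = −1/λ satisfies (A.5.2), since y Im F(iy) = y · (1/y) = 1 for all y > 0; in particular,

$$\int\_{1}^{\infty} \frac{\operatorname{Im} F(iy)}{y} \, dy = \int\_{1}^{\infty} \frac{dy}{y^2} = 1 < \infty,$$

so F is a Kac function. On the other hand, the Nevanlinna function F(λ) = λ is not a Kac function, since here Im F(iy) = y and the integral in (A.5.1) diverges.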

**Theorem A.5.2.** Let G be a Hilbert space and let F : C \ R → **B**(G) be an operator function. Then the following statements are equivalent:

(i) F has an integral representation of the form

$$F(\lambda) = \gamma + \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\Sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{A.5.3}$$

with a self-adjoint operator γ ∈ **B**(G) and a nondecreasing self-adjoint operator function Σ : R → **B**(G) such that

$$\int\_{\mathbb{R}} \frac{d\Sigma(t)}{|t|+1} \in \mathbf{B}(\mathcal{G}),\tag{A.5.4}$$

where the integrals in (A.5.3) and (A.5.4) converge in the strong topology.

(ii) F is a Kac function.

If either of these equivalent conditions is satisfied, then γ = lim_{y→∞} F(iy) in the strong sense.

Proof. It is helpful to recall the identity

$$\int\_{1}^{\infty} \frac{1}{t^2 + y^2} \, dy = \frac{1}{|t|} \left( \frac{\pi}{2} - \arctan \frac{1}{|t|} \right), \quad t \neq 0,\tag{A.5.5}$$

and the fact that

$$\lim\_{|t| \to 0} \frac{1}{|t|} \left(\frac{\pi}{2} - \arctan \frac{1}{|t|}\right) = 1. \tag{A.5.6}$$
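The identity (A.5.5) follows by direct integration,

$$\int\_{1}^{\infty} \frac{dy}{t^2 + y^2} = \frac{1}{|t|} \arctan \frac{y}{|t|} \bigg|\_{y=1}^{\infty} = \frac{1}{|t|} \left( \frac{\pi}{2} - \arctan \frac{1}{|t|} \right), \quad t \neq 0,$$

and (A.5.6) is clear from π/2 − arctan (1/|t|) = arctan |t| together with arctan |t| / |t| → 1 as |t| → 0.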

(i) ⇒ (ii) Write the representation (A.5.3) as

$$F(\lambda) = \gamma + \int\_{\mathbb{R}} g\_{\lambda}(t) \frac{d\Sigma(t)}{|t| + 1}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

where the bounded continuous function g_λ : R → C is given by

$$g\_{\lambda}(t) = \frac{|t|+1}{t-\lambda}.$$

Due to the condition (A.5.4) it follows that F(λ) in (A.5.3) is well defined and represents an element in **B**(G); cf. Proposition A.3.7. Moreover, (A.5.3) and (A.3.25) imply that for ϕ ∈ G

$$(F(\lambda)\varphi,\varphi) = (\gamma\varphi,\varphi) + \int\_{\mathbb{R}} g\_{\lambda}(t) \frac{d(\Sigma(t)\varphi,\varphi)}{|t|+1}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Therefore, one sees that the function F is holomorphic on C \ R. Furthermore, it is clear that F(λ)∗ = F(λ̄); cf. (A.3.24). Since (γϕ, ϕ) ∈ R, one also sees that

$$\frac{\operatorname{Im}\left(F(\lambda)\varphi,\varphi\right)}{\operatorname{Im}\lambda} = \int\_{\mathbb{R}} \frac{1}{|t-\lambda|^2} \, d(\Sigma(t)\varphi,\varphi).$$

Thus, F is a Nevanlinna function. Furthermore, integration of this identity shows, after a change of the order of integration, that

$$\int\_{1}^{\infty} \frac{\operatorname{Im} \left( F(iy)\varphi, \varphi \right)}{y} dy = \int\_{\mathbb{R}} \frac{1}{|t|} \left( \frac{\pi}{2} - \arctan \frac{1}{|t|} \right) \, d(\Sigma(t)\varphi, \varphi) < \infty.$$

Here the identity (A.5.5) was used in the first equality and (A.5.4), (A.5.6), and arctan 1/|t| → 0 for |t|→∞ were used to conclude that the last integral is finite; cf. Proposition A.3.7. This shows that F is a Kac function.

(ii) ⇒ (i) Since F is a Nevanlinna function, it follows from the integral representation (A.4.1) and (A.3.25) that for all ϕ ∈ G

$$\frac{\operatorname{Im}\left(F(iy)\varphi,\varphi\right)}{y} = \left(\beta\varphi,\varphi\right) + \int\_{\mathbb{R}} \frac{1}{t^2 + y^2} \, d(\Sigma(t)\varphi,\varphi).$$

Each of the terms on the right-hand side is nonnegative. Hence, the integrability condition (A.5.1) implies that β = 0 and furthermore

$$\int\_{1}^{\infty} \left( \int\_{\mathbb{R}} \frac{1}{t^2 + y^2} \, d(\Sigma(t)\varphi, \varphi) \right) dy < \infty.$$

Changing the order of integration and using (A.5.5) gives

$$\int\_{\mathbb{R}} \frac{1}{|t|} \left( \frac{\pi}{2} - \arctan \frac{1}{|t|} \right) \, d(\Sigma(t)\varphi, \varphi) < \infty,$$

and hence ∫ℝ (1 + |t|)⁻¹ d(Σ(t)ϕ, ϕ) < ∞. By Proposition A.3.7, this implies that (A.5.4) is satisfied. Observe that for each compact interval [a, b] ⊂ R one has the identity

$$
\int\_a^b \frac{1+\lambda t}{t-\lambda} \frac{d\Sigma(t)}{t^2+1} = \int\_a^b \frac{|t|+1}{t-\lambda} \frac{d\Sigma(t)}{|t|+1} - \int\_a^b \frac{|t|+1}{t^2+1} \frac{d\Sigma(t)}{|t|+1},
$$

and now all integrals have strong limits as [a, b] → R by Proposition A.3.7. Hence, one obtains from (A.4.1) with β = 0 that

$$F(\lambda) = \alpha + \int\_{\mathbb{R}} \frac{1 + \lambda t}{t - \lambda} \frac{d\Sigma(t)}{t^2 + 1} = \alpha + \int\_{\mathbb{R}} \frac{d\Sigma(t)}{t - \lambda} - \int\_{\mathbb{R}} \frac{t}{t^2 + 1} \, d\Sigma(t).$$

Observe that

$$\int\_{\mathbb{R}} \frac{t}{t^2 + 1} \, d\Sigma(t) = \int\_{\mathbb{R}} \frac{t(|t| + 1)}{t^2 + 1} \, \frac{d\Sigma(t)}{|t| + 1}$$

is a self-adjoint operator in **B**(G). Hence, with γ defined by

$$\gamma = \alpha - \int\_{\mathbb{R}} \frac{t}{t^2 + 1} \, d\Sigma(t)$$

one sees that γ ∈ **B**(G) is self-adjoint and the assertion (i) follows.

Finally, observe that by means of (A.3.26) one has

$$\lim\_{y \to \infty} (F(iy) - \gamma)\varphi = \lim\_{y \to \infty} \left( \int\_{\mathbb{R}} \frac{1}{t - iy} \, d\Sigma(t) \right) \varphi = 0$$

for all ϕ ∈ G. This gives the last assertion. □

A subclass of the Kac functions in Theorem A.5.2 concerns the case with bounded "measures".

**Proposition A.5.3.** Let G be a Hilbert space and let F : C \ R → **B**(G) be an operator function. Then the following statements are equivalent:

(i) F has an integral representation of the form

$$F(\lambda) = \gamma + \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\Sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

with a self-adjoint operator γ ∈ **B**(G) and a nondecreasing self-adjoint operator function Σ : R → **B**(G) such that ∫ℝ dΣ(t) ∈ **B**(G).

(ii) F is a Kac function satisfying (A.5.2).

If either of these equivalent conditions is satisfied, then γ = lim_{y→∞} F(iy) in the strong sense, and furthermore, for all ϕ ∈ G

$$\int\_{\mathbb{R}} d(\Sigma(t)\varphi, \varphi) = \sup\_{y>0} y(\text{Im}\, F(iy)\varphi, \varphi) < \infty. \tag{A.5.7}$$

Proof. (i) ⇒ (ii) It follows from Theorem A.5.2 that F is a Kac function. Moreover,

$$\begin{split} \sup\_{y>0} y \left( \operatorname{Im} F(iy) \varphi, \varphi \right) &= \sup\_{y>0} \int\_{\mathbb{R}} \frac{y^2}{t^2 + y^2} \, d(\Sigma(t)\varphi, \varphi) \\ &= \int\_{\mathbb{R}} d(\Sigma(t)\varphi, \varphi) - \inf\_{y>0} \int\_{\mathbb{R}} \frac{t^2}{t^2 + y^2} \, d(\Sigma(t)\varphi, \varphi) \\ &= \int\_{\mathbb{R}} d(\Sigma(t)\varphi, \varphi), \end{split} \tag{A.5.8}$$

which shows that the condition (A.5.2) is satisfied.

(ii) ⇒ (i) Since F is a Kac function, the integral representation follows from Theorem A.5.2 with a nondecreasing self-adjoint operator function Σ : R → **B**(G) which satisfies the integrability condition (A.5.4). Now it follows from the assumption (A.5.2) and (A.5.8) that ∫ℝ dΣ(t) ∈ **B**(G).

Moreover, the identity γ = lim_{y→∞} F(iy) in the strong sense is clear from Theorem A.5.2, and (A.5.7) was shown in (A.5.8). □

The previous proposition has an interesting consequence for a further subclass of the Kac functions which satisfy (A.5.2). The following result is connected with the characterization of generalized resolvents; see Chapter 4, where also the Sz.-Nagy dilation theorem is treated. Note that the conditions γ = γ∗ and ∫ℝ dΣ(t) ∈ **B**(G) in Proposition A.5.3 are now specialized to the conditions γ = 0 and ∫ℝ dΣ(t) ≤ 1.

**Proposition A.5.4.** Let G be a Hilbert space and let F : C \ R → **B**(G) be an operator function. Then the following statements are equivalent:

(i) F has an integral representation of the form

$$F(\lambda) = \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\Sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

with a nondecreasing self-adjoint operator function Σ : R → **B**(G) such that ∫ℝ dΣ(t) ∈ **B**(G) and ∫ℝ dΣ(t) ≤ 1.

(ii) F is a Nevanlinna function which satisfies

$$\frac{\operatorname{Im} F(\lambda)}{\operatorname{Im} \lambda} - F(\lambda)^\* F(\lambda) \ge 0, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Proof. (i) ⇒ (ii) It is clear from Proposition A.5.3 that F is a Kac function and hence a Nevanlinna function. Moreover, for all λ ∈ C \ R one has

$$\frac{(\operatorname{Im} F(\lambda)\varphi, \varphi)}{\operatorname{Im} \lambda} = \int\_{\mathbb{R}} \frac{1}{|t - \lambda|^2} \, d(\Sigma(t)\varphi, \varphi), \quad \varphi \in \mathcal{G}.$$

Since Σ(+∞) − Σ(−∞) ≤ 1, it follows from Proposition A.3.4 that

$$\begin{aligned} \left(F(\lambda)^\* F(\lambda)\varphi, \varphi\right) &= \|F(\lambda)\varphi\|^2 = \left\| \left(\int\_{\mathbb{R}} \frac{1}{t-\lambda} d\Sigma(t)\right) \varphi \right\|^2 \\ &\le \int\_{\mathbb{R}} \frac{1}{|t-\lambda|^2} d(\Sigma(t)\varphi, \varphi), \quad \varphi \in \mathcal{G}, \end{aligned}$$

which gives the desired result.

(ii) ⇒ (i) Let F be a Nevanlinna function which satisfies

$$|\operatorname{Im}\lambda| \, \|F(\lambda)\varphi\|^2 \le |\operatorname{Im}\,(F(\lambda)\varphi, \varphi)|, \quad \varphi \in \mathcal{G}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

Then

$$|\operatorname{Im}\lambda| \, \|F(\lambda)\varphi\| \le \|\varphi\|, \quad \varphi \in \mathcal{G}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

which leads to

$$|\operatorname{Im}\lambda| \, |(F(\lambda)\varphi,\varphi)| \le \|\varphi\|^2, \quad \varphi \in \mathcal{G}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}.$$

In particular, one has

$$|\operatorname{Im} \lambda| \, |\operatorname{Im}\,(F(\lambda)\varphi, \varphi)| \le \|\varphi\|^2, \quad \varphi \in \mathcal{G}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \tag{A.5.9}$$

and
$$|\operatorname{Im} \lambda| \, |\operatorname{Re}\,(F(\lambda)\varphi, \varphi)| \le \|\varphi\|^2, \quad \varphi \in \mathcal{G}, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{A.5.10}$$


The inequality (A.5.9) and Proposition A.5.3 imply that F admits the integral representation

$$F(\lambda) = \gamma + \int\_{\mathbb{R}} \frac{1}{t - \lambda} \, d\Sigma(t), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

with ∫ℝ dΣ(t) ∈ **B**(G), and hence ∫ℝ d(Σ(t)ϕ, ϕ) < ∞. Moreover, it follows from (A.5.9) and (A.5.7) that

$$\int\_{\mathbb{R}} d(\Sigma(t)\varphi, \varphi) = \sup\_{y>0} y \left( \mathrm{Im} \, F(iy)\varphi, \varphi \right) \le \left\| \varphi \right\|^2.$$

Hence, one sees that ∫ℝ dΣ(t) ≤ 1. Finally, note that

$$\operatorname{Re}\left(F(iy)\varphi,\varphi\right) = \left(\gamma\varphi,\varphi\right) + \int\_{\mathbb{R}} \frac{t}{t^2 + y^2} \, d(\Sigma(t)\varphi,\varphi), \quad y > 0.$$

Now (A.5.10) and the dominated convergence theorem show that γ = 0. □
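In the scalar case equality in (ii) can occur: for G = C and F(λ) = −1/λ the representing function Σ has a single unit jump at t = 0, so that ∫ℝ dΣ(t) = 1 and

$$\frac{\operatorname{Im} F(\lambda)}{\operatorname{Im} \lambda} = \frac{1}{|\lambda|^2} = |F(\lambda)|^2 = F(\lambda)^\* F(\lambda), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

so that the inequality in (ii) holds with equality.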

## **A.6 Stieltjes and inverse Stieltjes functions**

Let F be a **B**(G)-valued Nevanlinna function with the integral representation (A.4.1) as in Theorem A.4.2. Recall from Proposition A.4.5 that F is holomorphic on C \ [c, ∞) if and only if the function t → Σ(t)ϕ is constant on (−∞, c) for all ϕ ∈ G, and in this case one has the integral representation

$$F(\lambda) = \alpha + \lambda\beta + \int\_{[c,\infty)} \left(\frac{1}{t-\lambda} - \frac{t}{t^2+1}\right) d\Sigma(t),\tag{A.6.1}$$

where the integral converges in the strong topology for all λ ∈ C \ [c, ∞); cf. (A.3.28). Note that (A.6.1) implies

$$
\beta \varphi = \lim\_{x \downarrow -\infty} \frac{F(x)}{x} \varphi, \qquad \varphi \in \mathcal{G}. \tag{A.6.2}
$$

This identity can be verified in the same way as in the proof of Lemma A.2.6 using (A.3.26); cf. Lemma A.4.3. Moreover, x → F(x) is a **B**(G)-valued operator function on (−∞, c) with self-adjoint operators as values and one has

$$F(x\_1) \le F(x\_2), \quad x\_1 < x\_2 < c,\tag{A.6.3}$$

by Proposition A.4.5. In the present section this class of Nevanlinna functions F will now be further specified by requiring, in addition to holomorphy of F on C \ [c, ∞), a sign condition for the values F(x) for x ∈ (−∞, c).

**Definition A.6.1.** Let G be a Hilbert space and let c ∈ R be fixed. A Nevanlinna function F : C \ [c, ∞) → **B**(G) is said to belong to the class **S**G(−∞, c) of Stieltjes functions if
F(x) ≥ 0 for all x ∈ (−∞, c).


Similarly, a Nevanlinna function F : C \ [c, ∞) → **B**(G) is said to belong to the class **S**⁻¹G(−∞, c) of inverse Stieltjes functions if
F(x) ≤ 0 for all x ∈ (−∞, c).


Thus, if F ∈ **S**G(−∞, c), then x → F(x) is an operator function on (−∞, c) and

$$0 \le F(x\_1) \le F(x\_2), \quad x\_1 < x\_2 < c.$$

In particular, lim_{x↓−∞} F(x) exists in the strong sense and defines a nonnegative self-adjoint operator in **B**(G); cf. Lemma A.3.1. There is also a limit as x ↑ c in the sense of relations; see Chapter 5 for details. Similarly, one sees that if F ∈ **S**⁻¹G(−∞, c), then x → F(x) is an operator function on (−∞, c) and

$$F(x\_1) \le F(x\_2) \le 0, \quad x\_1 < x\_2 < c.$$

In particular, lim_{x↑c} F(x) exists in the strong sense and defines a nonpositive self-adjoint operator in **B**(G); cf. Lemma A.3.1. There is also a limit as x ↓ −∞ in the sense of relations; see Chapter 5 for details.
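As a scalar example, for G = C the function F(λ) = −1/λ belongs to **S**C(−∞, 0): it is holomorphic on C \ [0, ∞), and F(x) = −1/x ≥ 0 for x < 0; its representing function Σ has a single unit jump at t = 0. Likewise, F(λ) = λ belongs to **S**⁻¹C(−∞, 0), since F(x) = x ≤ 0 for x < 0; in this case one may write

$$F(\lambda) = L + \beta(\lambda - c) \quad \text{with} \quad c = 0, \; L = 0, \; \beta = 1, \; \Sigma = 0,$$

which is a representation of the form appearing in Theorem A.6.3 below.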

The characterization of Nevanlinna functions in Theorem A.4.2 can be specialized for Stieltjes functions and, in fact, the following result also shows that the Stieltjes functions form a subclass of the Kac functions; cf. Theorem A.5.2.

**Theorem A.6.2.** Let G be a Hilbert space, let F : C \ R → **B**(G) be an operator function, and let c ∈ R. Then the following statements are equivalent:

(i) F has an integral representation of the form

$$F(\lambda) = \gamma + \int\_{[c,\infty)} \frac{1}{t - \lambda} \, d\Sigma(t), \quad \lambda \in \mathbb{C} \backslash [c,\infty), \tag{A.6.4}$$

with a nonnegative self-adjoint operator γ ∈ **B**(G) and a nondecreasing self-adjoint operator function Σ : R → **B**(G) such that t → Σ(t)ϕ, ϕ ∈ G, is constant on (−∞, c), and

$$\int\_{[c,\infty)} \frac{d(\Sigma(t)\varphi,\varphi)}{|t|+1} < \infty, \qquad \varphi \in \mathcal{G},\tag{A.6.5}$$

where the integral in the representation (A.6.4) is interpreted in the weak topology for all λ ∈ C \ [c, ∞).

(ii) F is a Stieltjes function in **S**G(−∞, c).

Proof. (i) ⇒ (ii) Assume that the function F has the integral representation (A.6.4) with γ ∈ **B**(G) and Σ : R → **B**(G) as above, such that (A.6.5) holds. The condition (A.6.5) ensures that the integral in (A.6.4) is well defined in the weak sense. From the integral representation (A.6.4) one sees that F is holomorphic on C \ [c, ∞),

$$\frac{\left(\operatorname{Im} F(\lambda)\varphi,\varphi\right)}{\operatorname{Im} \lambda} = \int\_{\left[c,\infty\right)} \frac{1}{|t-\lambda|^2} \, d(\Sigma(t)\varphi,\varphi) \ge 0, \quad \lambda \in \mathbb{C} \backslash \mathbb{R}, \quad \varphi \in \mathcal{G},$$

and F(λ)∗ = F(λ̄) for λ ∈ C \ R. Hence, F is a **B**(G)-valued Nevanlinna function. Moreover, γ ≥ 0 and

$$\left(F(x)\varphi,\varphi\right) = \left(\gamma\varphi,\varphi\right) + \int\_{[c,\infty)} \frac{1}{t-x} \, d\left(\Sigma(t)\varphi,\varphi\right), \quad x \in (-\infty,c), \quad \varphi \in \mathcal{G},$$

imply F(x) ≥ 0 for all x < c. Thus, F ∈ **S**G(−∞, c).

(ii) ⇒ (i) Assume that F is a Nevanlinna function in **S**G(−∞, c). Then F has the integral representation (A.6.1), the operators F(x) defined for −∞ < x < c satisfy (A.6.3), and they are all nonnegative. Hence, the limit

$$\gamma = \lim\_{x \downarrow -\infty} F(x) \in \mathbf{B}(\mathfrak{G}) \tag{A.6.6}$$

exists in the strong sense and one has γ ≥ 0 by Lemma A.3.1. Note that (A.6.2) implies β = 0 in (A.6.1). Therefore, (A.6.1) implies that for all x < c:

$$-\int\_{[c,\infty)} \frac{1+tx}{t-x} \frac{d(\Sigma(t)\varphi,\varphi)}{t^2+1} \le (\alpha\varphi,\varphi), \quad \varphi \in \mathcal{G}.$$

Letting x ↓ −∞ and using the monotone convergence theorem one obtains

$$\int\_{[c,\infty)} \frac{t}{t^2 + 1} \, d(\Sigma(t)\varphi, \varphi) \le (\alpha \varphi, \varphi), \quad \varphi \in \mathcal{G}. \tag{A.6.7}$$

Since

$$\frac{d(\Sigma(t)\varphi,\varphi)}{|t|+1} = \frac{t^2+1}{t(|t|+1)}\frac{t}{t^2+1}d(\Sigma(t)\varphi,\varphi)$$

for large t, one concludes from (A.6.7) that (A.6.5) holds. Moreover, one obtains from (A.6.6) and (A.6.1) that

$$\gamma = \alpha - \int\_{[c,\infty)} \frac{t}{t^2 + 1} \, d\Sigma(t).$$

Thus, one may rewrite the integral representation (A.6.1) in the form (A.6.4). □

The inverse Stieltjes functions can also be characterized by integral representations. Here is the analog of Theorem A.6.2.

**Theorem A.6.3.** Let G be a Hilbert space, let F : C \ R → **B**(G) be an operator function, and let c ∈ R. Then the following statements are equivalent:

(i) F has an integral representation of the form

$$F(\lambda) = L + \beta(\lambda - c) + \int\_{[c,\infty)} \left( \frac{1}{t - \lambda} - \frac{1}{t - c} \right) \, d\Sigma(t), \quad \lambda \in \mathbb{C} \backslash [c,\infty), \text{ (A.6.8)}$$

with $L, \beta \in \mathbf{B}(\mathcal{G})$, $L \le 0$, $\beta \ge 0$, a nondecreasing self-adjoint operator function $\Sigma : \mathbb{R} \to \mathbf{B}(\mathcal{G})$ such that $t \mapsto (\Sigma(t)\varphi, \varphi)$, $\varphi \in \mathcal{G}$, is constant on $(-\infty, c)$, and

$$\int\_{[c,\infty)} \frac{d(\Sigma(t)\varphi,\varphi)}{(t-c)(|t|+1)} < \infty, \qquad \varphi \in \mathfrak{G},\tag{A.6.9}$$

where the integral in the representation (A.6.8) is interpreted in the weak topology for all $\lambda \in \mathbb{C} \setminus [c, \infty)$.

(ii) $F$ is an inverse Stieltjes function in $\mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$.

Proof. (i) ⇒ (ii) Assume that the function $F$ has the integral representation (A.6.8) with $L, \beta \in \mathbf{B}(\mathcal{G})$ and $\Sigma : \mathbb{R} \to \mathbf{B}(\mathcal{G})$ as above such that the condition (A.6.9) holds. First observe that

$$\int\_{[c,\infty)} \left(\frac{1}{t-\lambda} - \frac{1}{t-c}\right) d(\Sigma(t)\varphi,\varphi) = \int\_{[c,\infty)} \frac{(\lambda-c)(|t|+1)}{t-\lambda} \frac{d(\Sigma(t)\varphi,\varphi)}{(t-c)(|t|+1)}$$

for $\varphi \in \mathcal{G}$, and hence condition (A.6.9) ensures that the integral in (A.6.8) converges in the weak topology for all $\lambda \in \mathbb{C} \setminus [c, \infty)$. It also follows from (A.6.8) that $F$ is holomorphic on $\mathbb{C} \setminus [c, \infty)$,

$$\frac{\left(\operatorname{Im} F(\lambda)\varphi, \varphi\right)}{\operatorname{Im} \lambda} = \left(\beta \varphi, \varphi\right) + \int\_{\left[c, \infty\right)} \frac{1}{|t - \lambda|^2} \, d(\Sigma(t)\varphi, \varphi) \ge 0$$

for $\lambda \in \mathbb{C} \setminus \mathbb{R}$ and $\varphi \in \mathcal{G}$, and $F(\lambda)^* = F(\bar{\lambda})$ for $\operatorname{Im} \lambda \neq 0$. This shows that $F$ is a $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function. Now if $x < c$, then $\beta(x - c) \le 0$ and the integrand in (A.6.8) is nonpositive. Since also $L \le 0$, one concludes that $F(x) \le 0$ for all $x < c$. Thus, $F \in \mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$.

(ii) ⇒ (i) Assume that the function $F$ is a Nevanlinna function in $\mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$. Then $F$ has the integral representation (A.6.1), the operators $F(x)$ defined for $-\infty < x < c$ satisfy (A.6.3), and they are all nonpositive. Hence, the limit

$$L = \lim\_{x \uparrow c} F(x) \in \mathbf{B}(\mathcal{G})$$

exists in the strong sense and one has $L \le 0$ by Lemma A.3.1. Furthermore, for $\varphi \in \mathcal{G}$ and $-\infty < x < c$ one has

$$(F(x)\varphi,\varphi) = (\alpha\varphi,\varphi) + x(\beta\varphi,\varphi) + \int\_{[c,\infty)} \left(\frac{1}{t-x} - \frac{t}{t^2+1}\right) d(\Sigma(t)\varphi,\varphi),$$

so that for x ↑ c the monotone convergence theorem gives

$$(L\varphi,\varphi) = (\alpha\varphi,\varphi) + c(\beta\varphi,\varphi) + \int\_{[c,\infty)} \left(\frac{1}{t-c} - \frac{t}{t^2+1}\right) d(\Sigma(t)\varphi,\varphi). \tag{A.6.10}$$

In particular, the integral on the right-hand side of (A.6.10) exists for all ϕ ∈ G. From

$$\frac{d(\Sigma(t)\varphi,\varphi)}{(t-c)(|t|+1)} = \frac{t^2+1}{(1+tc)(|t|+1)}\left(\frac{1}{t-c} - \frac{t}{t^2+1}\right)d(\Sigma(t)\varphi,\varphi)$$

one then concludes that (A.6.9) holds. Using (A.6.10) the integral representation (A.6.1) can be rewritten as

$$\left(F(\lambda)\varphi,\varphi\right) = \left(L\varphi,\varphi\right) + (\lambda - c)(\beta\varphi,\varphi) + \int\_{[c,\infty)} \left(\frac{1}{t-\lambda} - \frac{1}{t-c}\right) \,d\left(\Sigma(t)\varphi,\varphi\right)$$

for $\lambda \in \mathbb{C} \setminus [c, \infty)$ and $\varphi \in \mathcal{G}$. This proves (A.6.8). $\square$

**Remark A.6.4.** The integral representations in (A.6.4) and (A.6.8) are understood in the weak sense. Using Proposition A.3.7 one can verify that the integral representation for Stieltjes functions in (A.6.4) remains valid in the strong sense and that the integrability condition (A.6.5) can be replaced by the condition

$$\int\_{[c,\infty)} \frac{d\Sigma(t)}{|t|+1} \in \mathbf{B}(\mathcal{G}),$$

where the integral exists in the strong sense. Within the theory of operator-valued integrals developed in Section A.3, the integral representation (A.6.8) for inverse Stieltjes functions and the integrability condition (A.6.9) cannot be interpreted directly in the strong sense.

Let $F$ be a Nevanlinna function and let $c \in \mathbb{R}$. Define the functions $F\_c$ and $F\_c^-$ by

$$F\_c(\lambda) = (\lambda - c)F(\lambda), \quad \lambda \in \mathbb{C} \backslash \mathbb{R},$$

and

$$F\_c^{-}(\lambda) = (\lambda - c)^{-1} F(\lambda), \quad \lambda \in \mathbb{C} \setminus \mathbb{R}.$$

The class of Nevanlinna functions is not stable under either of the mappings

$$F \mapsto F\_c \quad \text{or} \quad F \mapsto F\_c^-.$$

In fact, the next results show that the set of Nevanlinna functions that is stable under these mappings coincides with $\mathbf{S}\_{\mathcal{G}}(-\infty, c)$ or $\mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$, respectively.


**Proposition A.6.5.** Let $\mathcal{G}$ be a Hilbert space, let $F$ be a $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function, and let $c \in \mathbb{R}$. Then the following statements are equivalent:

(i) $F \in \mathbf{S}\_{\mathcal{G}}(-\infty, c)$;

(ii) $F\_c$ is a $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function.

In fact, if $F \in \mathbf{S}\_{\mathcal{G}}(-\infty, c)$, then $F\_c \in \mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$.

Proof. (i) ⇒ (ii) Assume that $F$ belongs to the class $\mathbf{S}\_{\mathcal{G}}(-\infty, c)$. Then, by Theorem A.6.2, the function $F$ has the integral representation (A.6.4) interpreted in the weak sense. This implies that $F\_c$ has the representation

$$F\_c(\lambda) = \gamma(\lambda - c) + \int\_{[c,\infty)} \frac{\lambda - c}{t - \lambda} d\Sigma(t), \quad \lambda \in \mathbb{C} \ \backslash [c, \infty), \tag{A.6.11}$$

in the weak sense. It follows that $F\_c$ is holomorphic on $\mathbb{C} \setminus [c, \infty)$,

$$\frac{\left(\operatorname{Im} F\_c(\lambda)\varphi, \varphi\right)}{\operatorname{Im} \lambda} = \left(\gamma\varphi, \varphi\right) + \int\_{\left[c,\infty\right)} \frac{t-c}{|t-\lambda|^2} \, d(\Sigma(t)\varphi, \varphi) \ge 0$$

for $\lambda \in \mathbb{C} \setminus \mathbb{R}$ and $\varphi \in \mathcal{G}$, and $F\_c(\lambda)^* = F\_c(\bar{\lambda})$ for $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Therefore, $F\_c$ is a $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function. It is also clear from (A.6.11) that $F\_c(x) \le 0$ for $x < c$, and hence $F\_c \in \mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$.

(ii) ⇒ (i) Assume that with $F$ also $F\_c$ is a Nevanlinna function. Let $\varphi \in \mathcal{G}$ and express the scalar Nevanlinna function $f\_\varphi(\lambda) = (F(\lambda)\varphi, \varphi)$ with its integral representation

$$f\_{\varphi}(\lambda) = \alpha\_{\varphi} + \beta\_{\varphi}\lambda + \int\_{\mathbb{R}} \left( \frac{1}{t - \lambda} - \frac{t}{t^2 + 1} \right) \, d\sigma\_{\varphi}(t), \quad \lambda \in \mathbb{C} \,\backslash \,\mathbb{R},$$

in Theorem A.2.5. The function $g(\lambda) = \lambda - c$, $c \in \mathbb{R}$, is entire and real on $\mathbb{R}$. According to the Stieltjes inversion formula in Lemma A.2.7, for any compact subinterval $[a, b] \subset (-\infty, c)$ one obtains

$$\begin{split} \lim\_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int\_{a}^{b} \left[ (gf\_{\varphi})(s + i\varepsilon) - (gf\_{\varphi})(s - i\varepsilon) \right] ds \\ = \frac{1}{2} \int\_{\{a\}} (t - c) \, d\sigma\_{\varphi}(t) + \int\_{a+}^{b-} (t - c) \, d\sigma\_{\varphi}(t) + \frac{1}{2} \int\_{\{b\}} (t - c) \, d\sigma\_{\varphi}(t) . \end{split} \tag{A.6.12}$$

Since the function $g(\lambda)f\_\varphi(\lambda) = (F\_c(\lambda)\varphi, \varphi)$ is also a Nevanlinna function, the limit in (A.6.12) is nonnegative. However, since $t - c < 0$ for all $t \in [a, b]$, the right-hand side in (A.6.12) is nonpositive, and consequently $\sigma\_\varphi$ must be constant on the whole interval $[a, b]$. Proposition A.2.9 implies that $f\_\varphi$ is holomorphic on $\mathbb{C} \setminus [c, \infty)$ for all $\varphi \in \mathcal{G}$. This shows that $F$ is holomorphic on $\mathbb{C} \setminus [c, \infty)$.
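The Stieltjes inversion step used here can be illustrated numerically. The sketch below (the point mass location `t0` and the window `[a, b]` are hypothetical choices of ours) approximates the left-hand side of the inversion formula for the scalar Nevanlinna function $f(\lambda) = 1/(t\_0 - \lambda)$, whose measure $\sigma$ is a unit point mass at $t\_0$:

```python
import math

# f(lam) = 1/(t0 - lam) has sigma = unit point mass at t0; the Stieltjes
# inversion formula should recover sigma((a, b)) = 1 when a < t0 < b.
t0, a, b = 2.0, 1.0, 3.0
eps = 1e-3                      # small imaginary shift (the limit is eps -> 0)
n = 20000                       # midpoint rule on [a, b]
h = (b - a) / n
total = 0.0
for k in range(n):
    s = a + (k + 0.5) * h
    fp = 1.0 / (t0 - (s + 1j * eps))
    fm = 1.0 / (t0 - (s - 1j * eps))
    total += ((fp - fm) / (2j * math.pi)).real * h

assert abs(total - 1.0) < 5e-3  # recovers the mass of sigma inside (a, b)
```

Repeating this with a window $[a, b]$ that avoids $t\_0$ returns approximately $0$, which is exactly the mechanism used in the proof above to show that $\sigma\_\varphi$ is constant below $c$.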

Finally, to see that $F$ takes nonnegative values on the interval $(-\infty, c)$, Proposition A.2.9 will be applied. Again consider the two functions $f\_\varphi(\lambda)$ and $(\lambda - c)f\_\varphi(\lambda)$. Both of them are differentiable and nondecreasing on the interval $(-\infty, c)$. In particular, for all $x < c$ one has $f\_\varphi'(x) \ge 0$ and

$$f\_{\varphi}(x) + (x - c)f\_{\varphi}'(x) = \frac{d}{dx} \left( (x - c)f\_{\varphi}(x) \right) \ge 0.$$

Since $x < c$, this implies that

$$f\_{\varphi}(x) \ge (c - x)f'\_{\varphi}(x) \ge 0,$$

and hence $(F(x)\varphi, \varphi) \ge 0$ for all $\varphi \in \mathcal{G}$. Therefore, $F(x) \ge 0$ for all $x < c$ and the claim $F \in \mathbf{S}\_{\mathcal{G}}(-\infty, c)$ is proved. $\square$
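Proposition A.6.5 and its final claim can also be checked numerically for a scalar example; the parameters below are hypothetical choices of ours, not data from the text:

```python
# Scalar Stieltjes example F(lam) = gamma + s/(t0 - lam) with t0 >= c,
# and F_c(lam) = (lam - c) F(lam) as in the proposition.
c, gamma, s, t0 = 1.0, 0.5, 2.0, 3.0

def F(lam: complex) -> complex:
    return gamma + s / (t0 - lam)

def Fc(lam: complex) -> complex:
    return (lam - c) * F(lam)

# F_c is again a Nevanlinna function: Im F_c / Im lam >= 0 off the real axis,
lam = -2.0 + 0.5j
assert Fc(lam).imag / lam.imag >= 0
# ... and F_c(x) <= 0 for x < c, i.e. F_c is inverse Stieltjes.
for x in (-50.0, -1.0, 0.5):
    assert Fc(complex(x)).real <= 0
```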

The next proposition is a variant of Proposition A.6.5 for inverse Stieltjes functions.

**Proposition A.6.6.** Let $\mathcal{G}$ be a Hilbert space, let $F$ be a $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function, and let $c \in \mathbb{R}$. Then the following statements are equivalent:

(i) $F \in \mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$;

(ii) $F\_c^-$ is a $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function.

In fact, if $F \in \mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$, then $F\_c^- \in \mathbf{S}\_{\mathcal{G}}(-\infty, c)$.

Proof. (i) ⇒ (ii) Assume that $F$ belongs to the class $\mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$. Then, by Theorem A.6.3, the function $F$ has the integral representation (A.6.8) interpreted in the weak sense. This implies that $F\_c^-$ has the representation

$$F\_c^-(\lambda) = \frac{-L}{c-\lambda} + \beta + \int\_{[c,\infty)} \frac{d\Sigma(t)}{(t-c)(t-\lambda)}, \quad \lambda \in \mathbb{C} \setminus [c,\infty), \tag{A.6.13}$$

in the weak sense. It follows that $F\_c^-$ is holomorphic on $\mathbb{C} \setminus [c, \infty)$,

$$\frac{\left(\operatorname{Im} F\_c^{-}(\lambda)\varphi,\varphi\right)}{\operatorname{Im}\lambda} = \frac{(-L\varphi,\varphi)}{|c-\lambda|^2} + \int\_{[c,\infty)} \frac{1}{(t-c)|t-\lambda|^2} \, d(\Sigma(t)\varphi,\varphi) \ge 0$$

for $\lambda \in \mathbb{C} \setminus \mathbb{R}$ and $\varphi \in \mathcal{G}$, and $F\_c^-(\lambda)^* = F\_c^-(\bar{\lambda})$ for $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Therefore, $F\_c^-$ is a $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function. It is also clear from (A.6.13) that $F\_c^-(x) \ge 0$ for $x < c$, and hence $F\_c^- \in \mathbf{S}\_{\mathcal{G}}(-\infty, c)$.

(ii) ⇒ (i) Assume that with $F$ also $F\_c^-$ is a Nevanlinna function. Let $\varphi \in \mathcal{G}$ and express the scalar Nevanlinna function $f\_\varphi(\lambda) = (F(\lambda)\varphi, \varphi)$ with its integral representation in Theorem A.2.5. Since the function $g(\lambda) = (\lambda - c)^{-1}$ is holomorphic for $\lambda \neq c$ and real on $\mathbb{R} \setminus \{c\}$, the same argument as in the proof of Proposition A.6.5 shows that $f\_\varphi$ is holomorphic on $\mathbb{C} \setminus [c, \infty)$ for all $\varphi \in \mathcal{G}$, and hence $F$ is holomorphic on $\mathbb{C} \setminus [c, \infty)$.

To see that $F$ takes nonpositive values on the interval $(-\infty, c)$ one applies Proposition A.2.9. Consider the functions $f\_\varphi(\lambda)$ and $(\lambda - c)^{-1}f\_\varphi(\lambda)$. Both of them are differentiable and nondecreasing on the interval $(-\infty, c)$. In particular, for all $x < c$ one has $f\_\varphi'(x) \ge 0$ and

$$\frac{f\_{\varphi}'(x)}{x-c} - \frac{f\_{\varphi}(x)}{(x-c)^2} = \frac{d}{dx} \left( (x-c)^{-1} f\_{\varphi}(x) \right) \ge 0.$$

Since $x < c$, this implies that

$$f\_{\varphi}(x) \le (x - c)f'\_{\varphi}(x) \le 0$$

and hence $(F(x)\varphi, \varphi) \le 0$ for all $\varphi \in \mathcal{G}$. Therefore, $F(x) \le 0$ for all $x < c$ and the claim $F \in \mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$ is proved. $\square$

The classes of Stieltjes functions and inverse Stieltjes functions are also connected to each other by inversion. For an operator-valued Nevanlinna function $F$ with values in $\mathbf{B}(\mathcal{G})$ the values of the inverse $-F^{-1}$ need not be bounded operators. For simplicity, it is assumed here that the relevant functions are uniformly strict, see Definition A.4.7. Recall that if $F$ is a uniformly strict $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function, then so is the function $-F^{-1}$; cf. Lemma A.4.8.

**Proposition A.6.7.** Let $\mathcal{G}$ be a Hilbert space, let $F$ be a $\mathbf{B}(\mathcal{G})$-valued Nevanlinna function, and assume that $F$ is uniformly strict. Then the following statements hold:

(i) If $F \in \mathbf{S}\_{\mathcal{G}}(-\infty, c)$, then $-F^{-1} \in \mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$.

(ii) If $F \in \mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$, then $-F^{-1} \in \mathbf{S}\_{\mathcal{G}}(-\infty, c)$.
Proof. (i) Let $F \in \mathbf{S}\_{\mathcal{G}}(-\infty, c)$ and note first that in the integral representation (A.6.1) one has $\beta = 0$, as follows from (A.6.2); cf. the proof of Theorem A.6.2. Hence, one concludes from (A.6.1) that

$$\operatorname{Im} F(i) = \int\_{[c,\infty)} \frac{d\Sigma(t)}{t^2 + 1},$$

which by assumption is a nonnegative boundedly invertible operator. Recall that $F$ has the integral representation (A.6.4) with the same $\Sigma$ as in (A.6.1). It is straightforward to see that for every $x < c$ there exists a constant $C\_x > 0$ such that

$$\frac{t-x}{t^2+1} \le C\_x \quad \text{for all} \quad t \ge c.$$

Thus, one concludes that

$$\begin{aligned} (F(x)\varphi,\varphi) &= (\gamma\varphi,\varphi) + \int\_{[c,\infty)} \frac{1}{t-x} \, d(\Sigma(t)\varphi,\varphi) \\ &\geq (\gamma\varphi,\varphi) + \frac{1}{C\_x} \int\_{[c,\infty)} \frac{d(\Sigma(t)\varphi,\varphi)}{t^2+1} \end{aligned}$$

for all $\varphi \in \mathcal{G}$. It follows that $(F(x)\varphi, \varphi) \ge \delta\_x \|\varphi\|^2$, $\varphi \in \mathcal{G}$, for some $\delta\_x > 0$. Therefore, $F(x)^{-1} \in \mathbf{B}(\mathcal{G})$ for all $x < c$. Thus, the function $-F^{-1}$ is holomorphic in the region $\mathbb{C} \setminus [c, \infty)$. Since $F(x) \ge 0$, it is clear that $-F^{-1}(x) \le 0$ for all $x < c$. Therefore, $-F^{-1} \in \mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$.

(ii) Let $F \in \mathbf{S}^{-1}\_{\mathcal{G}}(-\infty, c)$. Then it follows from Theorem A.6.3 that for all $x < c$

$$F(x) = L + (x - c) \left( \beta + \int\_{[c,\infty)} \frac{d\Sigma(t)}{(t - x)(t - c)} \right) \le 0,\tag{A.6.14}$$

where the integral is interpreted in the weak sense. Since $F$ is uniformly strict, Proposition A.4.6 asserts that for all $x < c$

$$F'(x) = \beta + \int\_{[c,\infty)} \frac{d\Sigma(t)}{(t-x)^2}$$

is a nonnegative boundedly invertible operator. It follows that

$$(\beta \varphi, \varphi) + \int\_{[c,\infty)} \frac{d(\Sigma(t)\varphi, \varphi)}{(t-x)(t-c)} \ge (\beta \varphi, \varphi) + \int\_{[c,\infty)} \frac{d(\Sigma(t)\varphi, \varphi)}{(t-x)^2} \ge \delta\_x \left\|\varphi\right\|^2$$

for all $\varphi \in \mathcal{G}$ and for some $\delta\_x > 0$, $x < c$. This implies that $F(x)$ in (A.6.14) has a bounded inverse for every $x < c$. Thus, $-F^{-1}$ is holomorphic in the region $\mathbb{C} \setminus [c, \infty)$. Since $F(x) \le 0$, it is clear that $0 \le -F(x)^{-1}$ for all $x < c$. Therefore, $-F^{-1} \in \mathbf{S}\_{\mathcal{G}}(-\infty, c)$. $\square$
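For a scalar example one can also test Proposition A.6.7 numerically (again with hypothetical parameters of our choosing; in the scalar case uniform strictness simply excludes real constant functions):

```python
# F(lam) = gamma + s/(t0 - lam) is Stieltjes on (-inf, c) for the choices
# below, and G = -1/F should be inverse Stieltjes by part (i).
c, gamma, s, t0 = 1.0, 0.5, 2.0, 3.0

def F(lam: complex) -> complex:
    return gamma + s / (t0 - lam)

def G(lam: complex) -> complex:
    return -1.0 / F(lam)

lam = 0.2 + 0.8j
assert G(lam).imag / lam.imag >= 0      # -1/F is again Nevanlinna
for x in (-30.0, -1.0, 0.9):
    assert G(complex(x)).real <= 0      # G(x) <= 0 on (-inf, c)
```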

## **Appendix B**

## **Self-adjoint Operators and Fourier Transforms**

Let $A$ be a self-adjoint operator in the Hilbert space $\mathfrak{H}$ and let $E(\cdot)$ be the corresponding spectral measure. The operator $A$ will be diagonalized by means of a self-adjoint operator $Q$ in a Hilbert space $L^2\_{d\rho}(\mathbb{R})$, where $\rho$ is a nondecreasing function on $\mathbb{R}$. Here $Q$ stands for multiplication by the independent variable in $L^2\_{d\rho}(\mathbb{R})$. By means of the spectral measure one constructs an integral transform $\mathcal{F}$ which maps $\mathfrak{H}$ unitarily onto $L^2\_{d\rho}(\mathbb{R})$ such that $A = \mathcal{F}^* Q \mathcal{F}$. This transform shares the diagonalization property with the classical Fourier transform and hence, for convenience, it will be referred to as Fourier transform in the following. There are two cases of interest in the present text.

The first (scalar) case is where $\mathfrak{H} = L^2\_r(a, b)$, where $r$ is a locally integrable function that is positive almost everywhere, and where $A$ is a self-adjoint operator in $\mathfrak{H}$ with spectral measure $E(\cdot)$. The Fourier transform is initially defined on the compactly supported scalar functions in $L^2\_r(a, b)$ by

$$
\widehat{f}(x) = \int\_a^b \omega(t, x) f(t) \, r(t) \, dt,
$$

where $(t, x) \mapsto \omega(t, x)$ is a continuous real function defined on the square $(a, b) \times \mathbb{R}$. The spectral measure of $A$ and the Fourier transform are assumed to be connected by

$$(E(\delta)f,f) = \int\_{\delta} \widehat{f}(x) \overline{\widehat{f}(x)} \, d\rho(x), \quad \delta \subset \mathbb{R},\tag{B.0.1}$$

for all $f \in L^2\_r(a, b)$ with compact support. Due to this condition the Fourier transform can be extended to an isometry on all of $L^2\_r(a, b)$. Under an additional assumption on $\omega$ it is shown that the Fourier transform is a unitary map.

© The Editor(s) (if applicable) and The Author(s) 2020. J. Behrndt et al., *Boundary Value Problems, Weyl Functions, and Differential Operators*, Monographs in Mathematics 108, https://doi.org/10.1007/978-3-030-36714-5

The second (vector) case is where $\mathfrak{H} = L^2\_\Delta(a, b)$, where $\Delta$ is a nonnegative $2 \times 2$ matrix function with locally integrable coefficients, and where $A$ is a self-adjoint relation. Then $A\_{\mathrm{op}}$ is a self-adjoint operator in $\mathfrak{H} \ominus \operatorname{mul} A$ with the spectral measure $E(\cdot)$. The Fourier transform is initially defined on the compactly supported $2 \times 1$ vector functions in $L^2\_\Delta(a, b)$ by

$$
\widehat{f}(x) = \int\_a^b \omega(t, x) \Delta(t) f(t) \, dt,
$$

where $(t, x) \mapsto \omega(t, x)$ is a continuous $1 \times 2$ matrix function defined on the square $(a, b) \times \mathbb{R}$. The spectral measure and the Fourier transform are assumed to be connected by (B.0.1) for all $f \in L^2\_\Delta(a, b)$ with compact support. Due to this condition, the Fourier transform can be extended to a partial isometry on all of $L^2\_\Delta(a, b)$. Under an additional assumption on $\omega$ it is shown that the Fourier transform is onto.

For the scalar case the theory will be developed in detail; the results in the vector case will only be explained briefly, as the details are very much the same.

## **B.1 The scalar case**

Let $(a, b)$ be an open interval and let $r$ be a locally integrable function that is positive almost everywhere. Let $A$ be a self-adjoint operator in the Hilbert space $L^2\_r(a, b)$ and let $E(\cdot)$ be the spectral measure of $A$. The basic ingredient is a continuous real function $(t, x) \mapsto \omega(t, x)$ on $(a, b) \times \mathbb{R}$. Then the Fourier transform $\widehat{f}$ of a compactly supported function $f \in L^2\_r(a, b)$, defined by

$$
\widehat{f}(x) = \int\_a^b \omega(t, x) f(t) r(t) \, dt, \quad x \in \mathbb{R}, \tag{B.1.1}
$$

is a well-defined complex function which is continuous on $\mathbb{R}$. The main assumption is that there exists a nondecreasing function $\rho$ on $\mathbb{R}$ such that the identity

$$(E(\delta)f,g) = \int\_{\delta} \widehat{f}(x)\overline{\widehat{g}(x)}\,d\rho(x), \quad \delta \subset \mathbb{R},\tag{B.1.2}$$

holds for all $f, g \in L^2\_r(a, b)$ with compact support and all bounded open intervals $\delta \subset \mathbb{R}$ whose endpoints are not eigenvalues of $A$. Since $A$ has at most countably many eigenvalues, an approximation argument and the dominated convergence theorem show that the identity (B.1.2) is in fact true for all $f, g \in L^2\_r(a, b)$ with compact support and all bounded open intervals $\delta \subset \mathbb{R}$, regardless of whether their endpoints are eigenvalues or not.

It follows from the assumption (B.1.2) that for every function $f \in L^2\_r(a, b)$ with compact support the continuous function $\widehat{f}$ belongs to $L^2\_{d\rho}(\mathbb{R})$. In fact, the following result is valid.

**Lemma B.1.1.** The Fourier transform $f \mapsto \widehat{f}$ in (B.1.1) extends by continuity from the compactly supported functions in $L^2\_r(a, b)$ to an isometric mapping

$$\mathcal{F}: L^2\_r(a, b) \to L^2\_{d\rho}(\mathbb{R})$$

such that for all $f \in L^2\_r(a, b)$

$$\lim\_{\alpha \to a, \beta \to b} \int\_{\mathbb{R}} \left| (\mathcal{F}f)(x) - \int\_{\alpha}^{\beta} \omega(t, x) f(t) r(t) \, dt \right|^2 \, d\rho(x) = 0. \tag{B.1.3}$$

The Fourier transform and the spectral measure E(·) are related via

$$(E(\delta)f, g) = \int\_{\delta} (\mathcal{F}f)(x) \overline{(\mathcal{F}g)(x)} \, d\rho(x) \tag{B.1.4}$$

for all $f, g \in L^2\_r(a, b)$, where $\delta \subset \mathbb{R}$ is any bounded open interval.

Proof. Step 1. The mapping $f \mapsto \widehat{f}$ is a contraction on the functions in $L^2\_r(a, b)$ that have compact support. To see this, let $f \in L^2\_r(a, b)$ have compact support. Then it follows from the assumption (B.1.2) that

$$\int\_{\delta} \widehat{f}(x) \overline{\widehat{f}(x)} \, d\rho(x) = (E(\delta)f, f) \le (f, f),$$

where $\delta$ is an arbitrary bounded open interval. The monotone convergence theorem shows that $\widehat{f}$ belongs to $L^2\_{d\rho}(\mathbb{R})$ and

$$(\widehat{f}, \widehat{f})\_\rho \le (f, f),$$

when $f \in L^2\_r(a, b)$ has compact support.

Step 2. The mapping $f \mapsto \widehat{f}$, defined on the functions in $L^2\_r(a, b)$ that have compact support, can be extended as a contraction to all of $L^2\_r(a, b)$. For this, let $f \in L^2\_r(a, b)$ and approximate $f$ in $L^2\_r(a, b)$ by functions $f\_n \in L^2\_r(a, b)$ with compact support. Then $(f\_n)$ is a Cauchy sequence in $L^2\_r(a, b)$ and since

$$\|\widehat{f\_n} - \widehat{f\_m}\|\_{\rho} \le \|f\_n - f\_m\|,$$

there is an element $\varphi \in L^2\_{d\rho}(\mathbb{R})$ such that $\widehat{f\_n} \to \varphi$ in $L^2\_{d\rho}(\mathbb{R})$. It follows from $\|\widehat{f\_n}\|\_\rho \le \|f\_n\|$ that, in fact,

$$\|\varphi\|\_\rho \le \|f\|.$$

In particular, the mapping $f \mapsto \varphi$ from $L^2\_r(a, b)$ to $L^2\_{d\rho}(\mathbb{R})$ is a contraction. Hence, the operator $\mathcal{F}$ given by $\mathcal{F}f = \varphi$ is well defined and takes $L^2\_r(a, b)$ contractively to $L^2\_{d\rho}(\mathbb{R})$. The assertion in (B.1.3) is clear by multiplying $f \in L^2\_r(a, b)$ with appropriately chosen characteristic functions.

Step 3. The operator $\mathcal{F}$ is an isometry. To see this, let $f, g \in L^2\_r(a, b)$ and approximate them by compactly supported functions $f\_n, g\_n \in L^2\_r(a, b)$. Then by the assumption (B.1.2) one obtains

$$(E(\delta)f\_n, g\_n) = \int\_{\delta} (\mathcal{F}f\_n)(x) \overline{(\mathcal{F}g\_n)(x)} \, d\rho(x). \tag{B.1.5}$$

Taking limits in (B.1.5) yields (B.1.4). It follows from (B.1.4) and the dominated convergence theorem that

$$(f,g) = \int\_{\mathbb{R}} (\mathcal{F}f)(x) \overline{(\mathcal{F}g)(x)} \, d\rho(x)$$

holds for all functions $f, g \in L^2\_r(a, b)$. Therefore, $\mathcal{F}$ is an isometry. $\square$

The following simple observation is a direct consequence of Lemma B.1.1.

**Corollary B.1.2.** For any bounded open interval $\delta \subset \mathbb{R}$ one has

$$
\mathcal{F}(E(\delta)f) = \chi\_{\delta} \mathcal{F}f, \quad f \in L^2\_r(a,b).
$$

Proof. It follows from (B.1.4) that

$$(E(\delta)f, g) = \int\_{\delta} (\mathcal{F}f)(x) \overline{(\mathcal{F}g)(x)} d\rho(x) = (\chi\_{\delta} \mathcal{F}f, \mathcal{F}g)\_{\rho}$$

for all $f, g \in L^2\_r(a, b)$. Consequently,

$$\begin{aligned} \|E(\delta)f - g\|^2 &= (E(\delta)f - g, E(\delta)f - g) \\ &= (E(\delta)f, f) - (E(\delta)f, g) - (g, E(\delta)f) + (g, g) \\ &= (\chi\_\delta \mathcal{F}f, \mathcal{F}f)\_\rho - (\chi\_\delta \mathcal{F}f, \mathcal{F}g)\_\rho \\ &\quad - (\mathcal{F}g, \chi\_\delta \mathcal{F}f)\_\rho + (\mathcal{F}g, \mathcal{F}g)\_\rho \\ &= \|\chi\_\delta \mathcal{F}f - \mathcal{F}g\|^2\_\rho. \end{aligned}$$

Setting $g = E(\delta)f$, the desired result follows. $\square$

Let $\delta \subset \mathbb{R}$ be a bounded open interval. The identity (B.1.4) is now written in the equivalent form

$$\int\_{\mathbb{R}} \chi\_{\delta}(t) d(E(t)f, g) = \int\_{\mathbb{R}} \chi\_{\delta}(x) (\mathcal{F}f)(x) \overline{(\mathcal{F}g)(x)} \, d\rho(x)$$

for all $f, g \in L^2\_r(a, b)$. Let $\Xi$ be a bounded Borel measurable function on $\mathbb{R}$. An approximation argument involving characteristic functions shows that then also

$$\int\_{\mathbb{R}} \Xi(t) d(E(t)f, g) = \int\_{\mathbb{R}} \Xi(x) (\mathcal{F}f)(x) \overline{(\mathcal{F}g)(x)} \, d\rho(x) \tag{B.1.6}$$

for all $f, g \in L^2\_r(a, b)$. As a particular case of (B.1.6) one has

$$\left( (A - \lambda)^{-1} f, g \right) = \int\_{\mathbb{R}} \frac{\mathcal{F}f(x)\overline{\mathcal{F}g(x)}}{x - \lambda} d\rho(x) \tag{B.1.7}$$

for $f, g \in L^2\_r(a, b)$ and $\lambda \in \rho(A)$. Moreover, if $g \in L^2\_r(a, b)$ has compact support, one obtains

$$\begin{aligned} \int\_a^b \left( (A - \lambda)^{-1} f \right) (t) \overline{g(t)} \, r(t) \, dt &= \int\_{\mathbb{R}} \frac{\mathcal{F} f(x)}{x - \lambda} \left( \int\_a^b \omega(t, x) \overline{g(t)} \, r(t) \, dt \right) \, d\rho(x) \\ &= \int\_a^b \left( \int\_{\mathbb{R}} \frac{\mathcal{F} f(x)}{x - \lambda} \omega(t, x) \, d\rho(x) \right) \overline{g(t)} \, r(t) \, dt . \end{aligned}$$

The Fubini theorem now implies that

$$\left( (A - \lambda)^{-1} f \right)(t) = \int\_{\mathbb{R}} \frac{\omega(t, x)}{x - \lambda} \, \mathcal{F}f(x) \, d\rho(x) \tag{B.1.8}$$

for almost all $t \in (a, b)$ and, in particular, the integrand on the right-hand side is integrable for almost all $t \in (a, b)$.

Parallel to the Fourier transform one may also introduce a reverse Fourier transform acting on the space $L^2\_{d\rho}(\mathbb{R})$. Let $\varphi \in L^2\_{d\rho}(\mathbb{R})$ have compact support and define the reverse Fourier transform $\breve{\varphi}$ by

$$
\breve{\varphi}(t) = \int\_{\mathbb{R}} \omega(t, x) \varphi(x) \, d\rho(x), \quad t \in (a, b). \tag{B.1.9}
$$

Then $\breve{\varphi}$ is a well-defined function which is continuous on $(a, b)$. By means of Lemma B.1.1 one can now prove the following result.

**Lemma B.1.3.** The reverse Fourier transform $\varphi \mapsto \breve{\varphi}$ in (B.1.9) extends by continuity from the compactly supported functions in $L^2\_{d\rho}(\mathbb{R})$ to a contractive mapping $\mathcal{G} : L^2\_{d\rho}(\mathbb{R}) \to L^2\_r(a, b)$ such that for all $\varphi \in L^2\_{d\rho}(\mathbb{R})$

$$\lim\_{\eta \to \mathbb{R}} \int\_{a}^{b} \left| (\mathcal{G}\varphi)(t) - \int\_{\eta} \omega(t, x)\varphi(x) \, d\rho(x) \right|^{2} r(t)\, dt = 0. \tag{B.1.10}$$

In fact, the extension $\mathcal{G}$ satisfies $\mathcal{G}\mathcal{F}f = f$ for all $f \in L^2\_r(a, b)$.

Proof. Step 1. The mapping $\varphi \mapsto \breve{\varphi}$ takes the compactly supported functions in $L^2\_{d\rho}(\mathbb{R})$ contractively into $L^2\_r(a, b)$. To see this, let $\varphi \in L^2\_{d\rho}(\mathbb{R})$ have compact support and let $[\alpha, \beta] \subset (a, b)$ be a compact interval. Then

$$\begin{split} \int\_{\alpha}^{\beta} |\breve{\varphi}(t)|^{2} r(t) \, dt &= \int\_{\alpha}^{\beta} \breve{\varphi}(t) \left( \int\_{\mathbb{R}} \omega(t,x) \overline{\varphi(x)} \, d\rho(x) \right) r(t) \, dt \\ &= \int\_{\mathbb{R}} \left( \int\_{\alpha}^{\beta} \omega(t,x) \breve{\varphi}(t) r(t) \, dt \right) \overline{\varphi(x)} \, d\rho(x) \\ &= \int\_{\mathbb{R}} (\mathcal{F}(\chi\_{[\alpha,\beta]} \breve{\varphi}))(x) \overline{\varphi(x)} \, d\rho(x) \\ &\leq \|\mathcal{F}(\chi\_{[\alpha,\beta]} \breve{\varphi})\|\_{\rho} \|\varphi\|\_{\rho} = \|\chi\_{[\alpha,\beta]} \breve{\varphi}\| \|\varphi\|\_{\rho}, \end{split}$$

where Lemma B.1.1 was used in the last step. This estimate gives that for any compact interval [α, β] ⊂ (a, b),

$$\sqrt{\int\_{\alpha}^{\beta} |\breve{\varphi}(t)|^2 r(t) \, dt} \le \|\varphi\|\_{\rho}.$$

By the monotone convergence theorem this leads to the inequality

$$\|\breve{\varphi}\| \le \|\varphi\|\_{\rho}.$$

Hence, the mapping $\varphi \mapsto \breve{\varphi}$ takes the compactly supported functions in $L^2\_{d\rho}(\mathbb{R})$ contractively into $L^2\_r(a, b)$.

Step 2. The mapping $\varphi \mapsto \breve{\varphi}$, defined on the functions in $L^2\_{d\rho}(\mathbb{R})$ that have compact support, can be contractively extended to all of $L^2\_{d\rho}(\mathbb{R})$. For this, let $\varphi \in L^2\_{d\rho}(\mathbb{R})$ and approximate $\varphi$ in $L^2\_{d\rho}(\mathbb{R})$ by functions $\varphi\_n \in L^2\_{d\rho}(\mathbb{R})$ with compact support. Then $(\varphi\_n)$ is a Cauchy sequence in $L^2\_{d\rho}(\mathbb{R})$ and since

$$\|\breve{\varphi}\_n - \breve{\varphi}\_m\| \le \|\varphi\_n - \varphi\_m\|\_\rho,$$

there is an element $f \in L^2\_r(a, b)$ such that $\breve{\varphi}\_n \to f$ in $L^2\_r(a, b)$. It follows from $\|\breve{\varphi}\_n\| \le \|\varphi\_n\|\_\rho$ that, in fact,

$$\|f\| \le \|\varphi\|\_{\rho}.$$

In particular, the mapping $\varphi \mapsto f$ from $L^2\_{d\rho}(\mathbb{R})$ to $L^2\_r(a, b)$ is a contraction. Hence, the operator $\mathcal{G}$ given by $\mathcal{G}\varphi = f$ is well defined and takes $L^2\_{d\rho}(\mathbb{R})$ contractively to $L^2\_r(a, b)$. The assertion in (B.1.10) is clear by multiplying $\varphi \in L^2\_{d\rho}(\mathbb{R})$ by appropriately chosen characteristic functions.

Step 3. The extended mapping G is a left inverse of the Fourier transform F. For this, observe that

$$(f,g) = \int\_{\mathbb{R}} (\mathcal{F}f)(x) \overline{(\mathcal{F}g)(x)} \, d\rho(x)$$

for all $f, g \in L^2\_r(a, b)$, since $\mathcal{F}$ is isometric by Lemma B.1.1. Thus, by means of the Fubini theorem one concludes that for $f, g \in L^2\_r(a, b)$ with $g$ having compact support,

$$\begin{aligned} (f,g) &= \lim\_{\eta \to \mathbb{R}} \int\_{\eta} (\mathcal{F}f)(x) \left( \int\_{a}^{b} \omega(t,x) \overline{g(t)} r(t) dt \right) d\rho(x) \\ &= \lim\_{\eta \to \mathbb{R}} \int\_{a}^{b} \left( \int\_{\eta} \omega(t,x) (\mathcal{F}f)(x) \, d\rho(x) \right) \overline{g(t)} r(t) dt \\ &= (\mathcal{G}\mathcal{F}f,g), \end{aligned}$$

where, in the last step, (B.1.10) and the continuity of the inner product have been used. Since the functions with compact support are dense in $L^2\_r(a, b)$, one obtains $\mathcal{G}\mathcal{F}f = f$ for all $f \in L^2\_r(a, b)$. $\square$

Recall that multiplication by the independent variable in the Hilbert space $L^2_{d\rho}(\mathbb{R})$ generates a self-adjoint operator $Q$ whose resolvent is given by

$$\big((Q - \lambda)^{-1}\varphi\big)(x) = \frac{\varphi(x)}{x - \lambda}, \qquad \lambda \in \mathbb{C} \setminus \mathbb{R}.$$

Hence, by (B.1.7), one sees that

$$\left( (A - \lambda)^{-1} f, g \right) = \left( (Q - \lambda)^{-1} \mathcal{F} f, \mathcal{F} g \right)_{\rho} = \left( \mathcal{F}^* (Q - \lambda)^{-1} \mathcal{F} f, g \right)$$

for all $f, g \in L^2_r(a,b)$, which leads to

$$(A - \lambda)^{-1} = \mathcal{F}^\*(Q - \lambda)^{-1}\mathcal{F}, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{B.1.11}$$

In general, the Fourier transform $\mathcal{F}$ here is an isometry which need not be surjective.

The above results have been proved under the assumption that $\omega$ is a continuous function on $(a,b) \times \mathbb{R}$ which satisfies (B.1.2). For the following theorem one needs the additional condition:

$$\begin{array}{c} \text{for each } x\_0 \in \mathbb{R} \text{ there exists a compactly} \\ \text{supported function } f \in L\_r^2(a, b) \text{ such that } (\mathcal{F}f)(x\_0) \neq 0. \end{array} \tag{B.1.12}$$

In the present situation this condition is equivalent to the following:

$$\text{for each } x_0 \in \mathbb{R} \text{ there exists } t_0 \in (a, b) \text{ such that } \omega(t_0, x_0) \neq 0. \tag{B.1.13}$$

**Theorem B.1.4.** Let $\omega$ be continuous on $(a,b) \times \mathbb{R}$ and assume that (B.1.2) and one of the equivalent conditions (B.1.12) or (B.1.13) are satisfied. Then the Fourier transform

$$f \mapsto \widehat{f}, \quad \widehat{f}(x) = \int\_{a}^{b} \omega(t, x) f(t) \, r(t) \, dt, \quad x \in \mathbb{R},$$

extends by continuity from the compactly supported functions $f \in L^2_r(a,b)$ to a unitary mapping $\mathcal{F} : L^2_r(a,b) \to L^2_{d\rho}(\mathbb{R})$. Moreover, the self-adjoint operator $A$ in $L^2_r(a,b)$ is unitarily equivalent to the multiplication operator $Q$ by the independent variable in $L^2_{d\rho}(\mathbb{R})$ via the Fourier transform $\mathcal{F}$:

$$A = \mathcal{F}^\* Q \mathcal{F}. \tag{B.1.14}$$

Proof. It is clear from Lemma B.1.1 that $\mathcal{F} : L^2_r(a,b) \to L^2_{d\rho}(\mathbb{R})$ is isometric and hence it remains to show for the first part of the theorem that $\mathcal{F}$ is surjective.

Assume that $\psi \in L^2_{d\rho}(\mathbb{R})$, $\psi \neq 0$, is orthogonal to $\operatorname{ran} \mathcal{F}$. Note that

$$(\psi, \mathcal{F}f)\_\rho = 0 \quad \Rightarrow \quad (\overline{\psi}, \mathcal{F}\overline{f})\_\rho = 0$$

and thus it follows that $\operatorname{Re} \psi$ and $\operatorname{Im} \psi$ are also orthogonal to $\operatorname{ran} \mathcal{F}$; hence it is no restriction to assume that $\psi \in L^2_{d\rho}(\mathbb{R})$ is real. Since $\psi \neq 0$, there exist an interval $I = [\alpha, \beta]$ and a Borel set $B \subset I$ with positive $\rho$-measure such that $\psi(x) > 0$ (or $\psi(x) < 0$) for all $x \in B$. By the condition (B.1.12), there exists for each $x_0 \in I$ a compactly supported function $f_{x_0} \in L^2_r(a,b)$ such that $(\mathcal{F}f_{x_0})(x_0) > 0$; here $f_{x_0}$ may be replaced by $-f_{x_0}$ if necessary. Since $\mathcal{F}f_{x_0}$ is continuous, there exists an open interval $I_{x_0}$ containing $x_0$ such that $(\mathcal{F}f_{x_0})(x) > 0$ for all $x \in I_{x_0}$. As $I$ is compact, there exist finitely many points $x_1, \dots, x_n \in I$ such that

$$I \subset \bigcup\_{i=1}^{n} I\_{x\_i}.$$

Hence, $B \cap I_{x_j}$ has positive $\rho$-measure for some $j \in \{1, \dots, n\}$ and therefore

$$(\chi\_{B \cap I\_{x\_j}} \psi, \mathcal{F}f\_{x\_j})\_\rho = \int\_{\mathbb{R}} \chi\_{B \cap I\_{x\_j}}(x) \psi(x) (\mathcal{F}f\_{x\_j})(x) \, d\rho(x) \neq 0. \tag{B.1.15}$$

On the other hand, since ψ ⊥ ran F, Corollary B.1.2 implies that

$$(\chi\_{\delta}\psi, \mathcal{F}f)\_{\rho} = (\psi, \chi\_{\delta}\mathcal{F}f)\_{\rho} = (\psi, \mathcal{F}E(\delta)f)\_{\rho} = 0$$

for all $f \in L^2_r(a,b)$ and all bounded open intervals $\delta \subset \mathbb{R}$. Hence, by the regularity of the Borel measure $\rho$, also $(\chi_{B \cap I_{x_j}} \psi, \mathcal{F}f)_\rho = 0$ for all $f \in L^2_r(a,b)$; this contradicts (B.1.15). Therefore, $\operatorname{ran} \mathcal{F}$ is dense in $L^2_{d\rho}(\mathbb{R})$, and since $\mathcal{F} : L^2_r(a,b) \to L^2_{d\rho}(\mathbb{R})$ is isometric, one obtains that $\mathcal{F}$ is surjective.

The identity (B.1.14) follows from (B.1.11), the fact that $\mathcal{F}$ is unitary, and Lemma 1.3.8. $\square$

In the situation of Theorem B.1.4 the inverse of the Fourier transform $\mathcal{F}$ is actually given by the reverse Fourier transform $\mathcal{G}$ in Lemma B.1.3, which is now a unitary mapping from $L^2_{d\rho}(\mathbb{R})$ to $L^2_r(a,b)$. Thus, for all $\varphi \in L^2_{d\rho}(\mathbb{R})$ one has

$$\lim\_{\eta \to \mathbb{R}} \int\_{a}^{b} \left| \mathcal{F}^{-1} \varphi(t) - \int\_{\eta} \omega(t, x) \varphi(x) \, d\rho(x) \right|^2 r(t) dt = 0.$$
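A concrete instance of this scalar theory is the classical half-line example (a hypothetical illustration under standard assumptions, not taken from the text): for $A = -d^2/dt^2$ on $(0,\infty)$ with a Dirichlet boundary condition and $r \equiv 1$ one may take $\omega(t,x) = \sin(t\sqrt{x})/\sqrt{x}$ and $d\rho(x) = (\sqrt{x}/\pi)\,dx$ on $(0,\infty)$. After the substitution $x = s^2$ the isometry $\|f\| = \|\mathcal{F}f\|_\rho$ becomes the classical Parseval relation for the sine transform, $\int_0^\infty |f(t)|^2\,dt = \frac{2}{\pi}\int_0^\infty \big|\int_0^\infty \sin(st)f(t)\,dt\big|^2\,ds$. The following Python sketch checks this numerically for $f(t) = e^{-t}$, whose sine transform is $s/(1+s^2)$:

```python
import math

def sine_transform(f, s, T=40.0, n=20000):
    """Trapezoidal approximation of int_0^T sin(s*t) f(t) dt."""
    h = T / n
    total = 0.5 * (math.sin(0.0) * f(0.0) + math.sin(s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.sin(s * t) * f(t)
    return h * total

f = lambda t: math.exp(-t)

# The transform agrees with the closed form s/(1 + s^2).
for s in (0.5, 1.0, 2.0):
    assert abs(sine_transform(f, s) - s / (1 + s * s)) < 1e-3

# Parseval: ||f||^2 = int_0^oo e^{-2t} dt = 1/2 should equal
# (2/pi) * int_0^oo (s/(1+s^2))^2 ds  (truncated at S below).
S, m = 200.0, 200000
h = S / m
rhs = 0.5 * (0.0 + (S / (1 + S * S)) ** 2)
for k in range(1, m):
    s = k * h
    rhs += (s / (1 + s * s)) ** 2
rhs *= (2.0 / math.pi) * h
assert abs(rhs - 0.5) < 1e-2
```

The remaining discrepancy comes only from truncating the two improper integrals; it shrinks as the cutoffs $T$ and $S$ grow.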

## **B.2 The vector case**

Let $(a,b)$ be an open interval and let $\Delta$ be a measurable nonnegative $2 \times 2$ matrix function on $(a,b)$. Let $A$ be a self-adjoint relation in the corresponding Hilbert space $L^2_\Delta(a,b)$ and assume that its multivalued part is at most finite-dimensional. Let $E(\cdot)$ be the spectral measure of $A_{\rm op}$, where $A_{\rm op}$ is the orthogonal operator part of $A$: it is a self-adjoint operator in $L^2_\Delta(a,b) \ominus \operatorname{mul} A$. In this case the basic ingredient is a continuous $1 \times 2$ matrix function $(t,x) \mapsto \omega(t,x)$ on $(a,b) \times \mathbb{R}$ whose entries are real. Hence, the Fourier transform $\widehat{f}$ of a compactly supported function $f \in L^2_\Delta(a,b)$ given by

$$
\widehat{f}(x) = \int\_a^b \omega(t, x)\Delta(t)f(t)\,dt, \quad x \in \mathbb{R}, \tag{B.2.1}
$$

is a well-defined complex function that is continuous on $\mathbb{R}$. The main assumption is that there exists a nondecreasing function $\rho$ on $\mathbb{R}$ such that the identity

$$\left(E(\delta)f, g\right) = \int_{\delta} \widehat{f}(x)\overline{\widehat{g}(x)}\,d\rho(x), \quad \delta \subset \mathbb{R},\tag{B.2.2}$$

holds for all $f, g \in L^2_\Delta(a,b)$ with compact support and all bounded open intervals $\delta \subset \mathbb{R}$ whose endpoints are not eigenvalues of $A_{\rm op}$. An approximation argument shows that (B.2.2) remains valid for all $f, g \in L^2_\Delta(a,b)$ with compact support and all bounded open intervals $\delta \subset \mathbb{R}$.

It follows from the assumption (B.2.2) that for every function $f \in L^2_\Delta(a,b)$ with compact support the continuous function $\widehat{f}$ belongs to $L^2_{d\rho}(\mathbb{R})$. In the present situation Lemma B.1.1 remains valid in a slightly modified form; here $\mathcal{F}$ is a partial isometry with $\ker \mathcal{F} = \operatorname{mul} A$.

**Lemma B.2.1.** The Fourier transform $f \mapsto \widehat{f}$ in (B.2.1) extends by continuity from the compactly supported functions in $L^2_\Delta(a,b)$ to a partial isometry

$$\mathcal{F}: L^2\_{\Delta}(a, b) \to L^2\_{d\rho}(\mathbb{R})$$

with $\ker \mathcal{F} = \operatorname{mul} A$ such that for all $f \in L^2_\Delta(a,b)$

$$\lim\_{\alpha \to a, \beta \to b} \int\_{\mathbb{R}} \left| (\mathcal{F}f)(x) - \int\_{\alpha}^{\beta} \omega(t, x) \Delta(t) f(t) \, dt \right|^2 \, d\rho(x) = 0.$$

The Fourier transform F and the spectral measure E(·) are related via

$$\left(E(\delta)f, g\right) = \int_{\delta} (\mathcal{F}f)(x) \overline{(\mathcal{F}g)(x)} \, d\rho(x) \tag{B.2.3}$$

for all $f, g \in L^2_\Delta(a,b)$, where $\delta \subset \mathbb{R}$ is any bounded open interval. Next one verifies as in the proof of Corollary B.1.2 that the identity

$$
\mathcal{F}(E(\delta)f) = \chi\_{\delta} \mathcal{F}f, \quad f \in L^2\_{\Delta}(a,b),
$$

holds for bounded open intervals $\delta \subset \mathbb{R}$. Further, (B.2.3) and an approximation argument using characteristic functions show that

$$\left( (A - \lambda)^{-1} f, g \right) = \int\_{\mathbb{R}} \frac{\mathcal{F}f(x) \overline{\mathcal{F}g(x)}}{x - \lambda} d\rho(x) \tag{B.2.4}$$

for all $f, g \in L^2_\Delta(a,b)$ and $\lambda \in \rho(A)$. Moreover,

$$\left( (A - \lambda)^{-1} f \right)(t) = \int\_{\mathbb{R}} \frac{\omega(t, x)^{\*}}{x - \lambda} \, \mathcal{F}f(x) \, d\rho(x), \quad \lambda \in \mathbb{C} \,\backslash \,\mathbb{R},\tag{B.2.5}$$

for almost all t ∈ (a, b) and, in particular, the integrand on the right-hand side is integrable for almost all t ∈ (a, b).

The reverse Fourier transform of a function $\varphi \in L^2_{d\rho}(\mathbb{R})$ with compact support is defined by

$$
\breve\varphi(t) = \int_{\mathbb{R}} \omega(t, x)^* \varphi(x) \, d\rho(x), \quad t \in (a, b). \tag{B.2.6}
$$

Then $\breve\varphi$ is a well-defined $2 \times 1$ matrix function which is continuous on $(a,b)$. By means of Lemma B.2.1 one now obtains the following result, which is similar to Lemma B.1.3. There are some slight differences in the proof for the vector case, which is provided here for completeness.

**Lemma B.2.2.** The reverse Fourier transform $\varphi \mapsto \breve\varphi$ in (B.2.6) extends by continuity from the compactly supported functions in $L^2_{d\rho}(\mathbb{R})$ to a contractive mapping $\mathcal{G} : L^2_{d\rho}(\mathbb{R}) \to L^2_\Delta(a,b)$ such that for all $\varphi \in L^2_{d\rho}(\mathbb{R})$

$$\lim\_{\eta \to \mathbb{R}} \int\_{a}^{b} \left| \mathcal{G}\varphi(t) - \int\_{\eta} \omega(t, x)^{\*} \varphi(x) \, d\rho(x) \right|^{2} dt = 0. \tag{B.2.7}$$

In fact, the extension $\mathcal{G}$ satisfies $\mathcal{G}\mathcal{F}f = f$ for all $f \in L^2_\Delta(a,b) \ominus \operatorname{mul} A$.

Proof. Step 1. The mapping $\varphi \mapsto \breve\varphi$ takes the compactly supported functions in $L^2_{d\rho}(\mathbb{R})$ contractively into $L^2_\Delta(a,b)$. For this, let $\varphi \in L^2_{d\rho}(\mathbb{R})$ have compact support and let $[\alpha, \beta] \subset (a,b)$ be a compact interval. First observe that $\chi_{[\alpha,\beta]}\breve\varphi \in L^2_\Delta(a,b)$, so that indeed

$$\int_{\alpha}^{\beta} \omega(t, x) \Delta(t)\, \breve\varphi(t) \, dt = (\mathcal{F}(\chi_{[\alpha, \beta]} \breve\varphi))(x).$$

Therefore, one obtains

$$\begin{aligned} \int_{\alpha}^{\beta} \breve\varphi(t)^{*} \Delta(t)\, \breve\varphi(t) \, dt &= \int_{\alpha}^{\beta} \left( \int_{\mathbb{R}} \omega(t,x)^{*} \varphi(x) \, d\rho(x) \right)^{*} \Delta(t)\, \breve\varphi(t) \, dt \\ &= \int_{\mathbb{R}} \left( \int_{\alpha}^{\beta} \omega(t,x) \Delta(t)\, \breve\varphi(t) \, dt \right) \overline{\varphi(x)} \, d\rho(x) \\ &= \int_{\mathbb{R}} (\mathcal{F}(\chi_{[\alpha,\beta]} \breve\varphi))(x)\, \overline{\varphi(x)} \, d\rho(x) \\ &\leq \|\mathcal{F}(\chi_{[\alpha,\beta]} \breve\varphi)\|_{\rho}\, \|\varphi\|_{\rho} \leq \|\chi_{[\alpha,\beta]} \breve\varphi\|\, \|\varphi\|_{\rho}, \end{aligned}$$

where Lemma B.2.1 was used in the last step. The above estimate gives for any compact interval [α, β] ⊂ (a, b)

$$\left( \int_{\alpha}^{\beta} \breve\varphi(t)^{*} \Delta(t)\, \breve\varphi(t) \, dt \right)^{1/2} \le \|\varphi\|_{\rho}.$$

By the monotone convergence theorem this leads to the inequality

$$\|\breve\varphi\| \le \|\varphi\|_{\rho}.$$

Hence, the mapping $\varphi \mapsto \breve\varphi$ takes the compactly supported functions in $L^2_{d\rho}(\mathbb{R})$ contractively into $L^2_\Delta(a,b)$.

Step 2. The mapping $\varphi \mapsto \breve\varphi$ defined on the functions in $L^2_{d\rho}(\mathbb{R})$ that have compact support can be contractively extended to all of $L^2_{d\rho}(\mathbb{R})$. To see this, let $\varphi \in L^2_{d\rho}(\mathbb{R})$ and approximate $\varphi$ in $L^2_{d\rho}(\mathbb{R})$ by functions $\varphi_n \in L^2_{d\rho}(\mathbb{R})$ with compact support. Then $(\varphi_n)$ is a Cauchy sequence in $L^2_{d\rho}(\mathbb{R})$ and since

$$\|\breve\varphi_n - \breve\varphi_m\| \le \|\varphi_n - \varphi_m\|_\rho,$$

there is an element $f \in L^2_\Delta(a,b)$ such that $\breve\varphi_n \to f$ in $L^2_\Delta(a,b)$. It follows from $\|\breve\varphi_n\| \le \|\varphi_n\|_\rho$ that, in fact,

$$\|f\| \le \|\varphi\|_{\rho}.$$

In particular, the mapping $\varphi \mapsto f$ from $L^2_{d\rho}(\mathbb{R})$ to $L^2_\Delta(a,b)$ is a contraction. Hence, the operator $\mathcal{G}$ given by $\mathcal{G}\varphi = f$ is well defined and takes $L^2_{d\rho}(\mathbb{R})$ contractively to $L^2_\Delta(a,b)$. The assertion in (B.2.7) is clear by multiplying $\varphi \in L^2_{d\rho}(\mathbb{R})$ by appropriately chosen characteristic functions.

Step 3. The extended mapping G is a left inverse of the Fourier transform F. For this, first note that it follows from (B.2.3) and dominated convergence that

$$(f,g) = \int\_{\mathbb{R}} (\mathcal{F}f)(x) \overline{(\mathcal{F}g)(x)} \, d\rho(x)$$

for all $f \in L^2_\Delta(a,b) \ominus \operatorname{mul} A$ and $g \in L^2_\Delta(a,b)$. Thus, by means of the Fubini theorem, one concludes for $f \in L^2_\Delta(a,b) \ominus \operatorname{mul} A$ and $g \in L^2_\Delta(a,b)$, each having compact support, that

$$\begin{aligned} (f,g) &= \lim_{\eta \to \mathbb{R}} \int_{\eta} (\mathcal{F}f)(x) \left( \int_{a}^{b} g(t)^{*} \Delta(t)\, \omega(t,x)^{*} \, dt \right) d\rho(x) \\ &= \lim_{\eta \to \mathbb{R}} \int_{a}^{b} g(t)^{*} \Delta(t) \left( \int_{\eta} \omega(t,x)^{*} (\mathcal{F}f)(x) \, d\rho(x) \right) dt \\ &= (\mathcal{G}\mathcal{F}f,g), \end{aligned}$$

where, in the last step, (B.2.7) and the continuity of the inner product have been used. This implies

$$\mathcal{G}\mathcal{F}f = f$$

for all $f \in L^2_\Delta(a,b) \ominus \operatorname{mul} A$ with compact support. Since the functions in $L^2_\Delta(a,b)$ with compact support are dense in $L^2_\Delta(a,b)$ and $\operatorname{mul} A$ is finite-dimensional by assumption, it follows that the functions with compact support are also dense in $L^2_\Delta(a,b) \ominus \operatorname{mul} A$. Therefore, one obtains from the contractivity of $\mathcal{F}$ and $\mathcal{G}$ that $\mathcal{G}\mathcal{F}f = f$ for all $f \in L^2_\Delta(a,b) \ominus \operatorname{mul} A$. $\square$

Recall that multiplication by the independent variable in the Hilbert space $L^2_{d\rho}(\mathbb{R})$ generates a self-adjoint operator $Q$ whose resolvent is given by

$$\big((Q - \lambda)^{-1}\varphi\big)(x) = \frac{\varphi(x)}{x - \lambda}, \qquad \lambda \in \mathbb{C} \setminus \mathbb{R}.$$

Hence, by (B.2.4), one sees that

$$\left( (A - \lambda)^{-1} f, g \right) = \left( (Q - \lambda)^{-1} \mathcal{F} f, \mathcal{F} g \right)_{\rho} = \left( \mathcal{F}^* (Q - \lambda)^{-1} \mathcal{F} f, g \right)$$

for all $f \in L^2_\Delta(a,b)$ and $g \in L^2_\Delta(a,b)$, which leads to

$$(A - \lambda)^{-1} = \mathcal{F}^\*(Q - \lambda)^{-1}\mathcal{F}, \qquad \lambda \in \mathbb{C} \backslash \mathbb{R}. \tag{B.2.8}$$
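The identity (B.2.8) has an elementary finite-dimensional analogue that may clarify its structure (a hypothetical illustration, not part of the text): if a real symmetric matrix is diagonalized as $A = F^* Q F$ with $F$ orthogonal and $Q$ diagonal, then $(A - \lambda)^{-1} = F^*(Q - \lambda)^{-1}F$ for nonreal $\lambda$. A minimal Python check:

```python
import math

# A = F^T Q F for a 2x2 real symmetric matrix: rows of F are the
# normalized eigenvectors, q holds the eigenvalues.
A = [[2.0, 1.0], [1.0, 2.0]]          # eigenvalues 1 and 3
s = 1 / math.sqrt(2)
F = [[s, -s], [s, s]]                  # row 0 <-> eigenvalue 1, row 1 <-> 3
q = [1.0, 3.0]

lam = 0.5 + 1.0j                       # a nonreal spectral parameter

# Left-hand side: direct inverse of the 2x2 matrix A - lam*I.
a, b = A[0][0] - lam, A[0][1]
c, d = A[1][0], A[1][1] - lam
det = a * d - b * c
lhs = [[d / det, -b / det], [-c / det, a / det]]

# Right-hand side: F^T diag(1/(q_k - lam)) F, the analogue of (B.2.8).
rhs = [[sum(F[k][i] * F[k][j] / (q[k] - lam) for k in range(2))
        for j in range(2)] for i in range(2)]

for i in range(2):
    for j in range(2):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-12
```

The same computation goes through for any self-adjoint matrix; (B.2.8) is the infinite-dimensional version with $\mathcal{F}$ a partial isometry instead of an orthogonal matrix.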

Consider the restriction $\mathcal{F}_{\rm op} : L^2_\Delta(a,b) \ominus \operatorname{mul} A \to L^2_{d\rho}(\mathbb{R})$ of the partial isometry $\mathcal{F}$ to $L^2_\Delta(a,b) \ominus \operatorname{mul} A$ and recall that

$$\ker \mathcal{F} = \operatorname{mul} A = \ker\, (A - \lambda)^{-1}, \qquad \lambda \in \mathbb{C} \setminus \mathbb{R}.$$

Then Fop is an isometry and (B.2.8) leads to

$$(A\_{\mathrm{op}} - \lambda)^{-1} = \mathcal{F}\_{\mathrm{op}}^\*(Q - \lambda)^{-1} \mathcal{F}\_{\mathrm{op}}, \qquad \lambda \in \mathbb{C} \ \backslash \mathbb{R}.$$

In general, the restricted Fourier transform $\mathcal{F}_{\rm op}$ need not be surjective.

In order to prove that Fop is surjective one needs an additional condition:

$$\begin{array}{c} \text{for each } x\_0 \in \mathbb{R} \text{ there exists a compactly} \\ \text{supported function } f \in L^2\_\Delta(a, b) \text{ such that } (\mathcal{F}\_\text{op}f)(x\_0) \neq 0. \end{array} \tag{B.2.9}$$

For the following result, recall that it is assumed that mul A is finite-dimensional.

**Theorem B.2.3.** Let $\omega$ be a continuous $1 \times 2$ matrix function $(t,x) \mapsto \omega(t,x)$ on $(a,b) \times \mathbb{R}$ whose entries are real, and assume that the conditions (B.2.2) and (B.2.9) are satisfied. Then the Fourier transform

$$f \mapsto \widehat{f}, \quad \widehat{f}(x) = \int\_{a}^{b} \omega(t, x) \Delta(t) f(t) \, dt, \quad x \in \mathbb{R},$$

extends by continuity from the compactly supported functions $f \in L^2_\Delta(a,b)$ to a surjective partial isometry $\mathcal{F}$ from $L^2_\Delta(a,b)$ to $L^2_{d\rho}(\mathbb{R})$ with $\ker \mathcal{F} = \operatorname{mul} A$, that is, the restriction

$$\mathcal{F}\_{\mathrm{op}} : L^2\_{\Delta}(a, b) \ominus \mathrm{mul} \, A \to L^2\_{d\rho}(\mathbb{R})$$

is a unitary mapping. Moreover, the self-adjoint operator $A_{\rm op}$ in $L^2_\Delta(a,b) \ominus \operatorname{mul} A$ is unitarily equivalent to the multiplication operator $Q$ by the independent variable in $L^2_{d\rho}(\mathbb{R})$ via the restricted Fourier transform $\mathcal{F}_{\rm op}$:

$$A\_{\rm op} = \mathcal{F}\_{\rm op}^\* Q \mathcal{F}\_{\rm op}. \tag{B.2.10}$$

Proof. It is clear from Lemma B.2.1 that $\mathcal{F} : L^2_\Delta(a,b) \to L^2_{d\rho}(\mathbb{R})$ is a partial isometry with $\ker \mathcal{F} = \operatorname{mul} A$. One verifies in the same way as in the proof of Theorem B.1.4 that $\mathcal{F}$ is surjective, and hence $\mathcal{F}_{\rm op}$ is unitary. The identity (B.2.10) follows from (B.2.8). $\square$

In the situation of Theorem B.2.3 the inverse of the Fourier transform $\mathcal{F}_{\rm op}$ is actually given by the reverse Fourier transform $\mathcal{G}$ in Lemma B.2.2, which is now a unitary mapping from $L^2_{d\rho}(\mathbb{R})$ to $L^2_\Delta(a,b) \ominus \operatorname{mul} A$. Thus, for all $\varphi \in L^2_{d\rho}(\mathbb{R})$ one has

$$\lim\_{\eta \to \mathbb{R}} \int\_{a}^{b} \left| \Delta(t)^{\frac{1}{2}} \left( \mathcal{F}\_{\text{op}}^{-1} \varphi(t) - \int\_{\eta} \omega(t, x)^{\*} \varphi(x) \, d\rho(x) \right) \right|^{2} dt = 0.$$

## **Appendix C Sums of Closed Subspaces in Hilbert Spaces**

In this appendix the sum of closed (linear) subspaces in a Hilbert space is discussed and, in particular, conditions are given so that sums of closed subspaces are closed. There is also a brief review of the opening and gap of closed subspaces.

In the following M and N will be closed subspaces of a Hilbert space H. The first lemma on the sum of closed subspaces is preliminary.

**Lemma C.1.** Let M and N be closed subspaces of H. Then the following statements are equivalent:

(i) M + N is closed and M ∩ N = {0};

(ii) there exists ρ with 0 < ρ < 1 such that

$$\rho\sqrt{\|f\|^2 + \|g\|^2} \le \|f + g\|, \quad f \in \mathfrak{M}, \ g \in \mathfrak{N}.\tag{C.1}$$

Proof. (i) ⇒ (ii) Assume that M + N is closed and M ∩ N = {0}. The projection from the Hilbert space M+N onto M parallel to N is a closed, everywhere defined operator. Hence, by the closed graph theorem, the projection is bounded and there exists C > 0 such that

$$\|f\| \le C\|f+g\|, \quad f \in \mathfrak{M}, \ g \in \mathfrak{N}.$$

Likewise, there exists D > 0 such that

$$\|g\| \le D \|f + g\|, \quad f \in \mathfrak{M}, \ g \in \mathfrak{N}.$$

A combination of these inequalities leads to

$$\left\|f\right\|^2 + \left\|g\right\|^2 \le (C^2 + D^2) \left\|f + g\right\|^2, \quad f \in \mathfrak{M}, \ g \in \mathfrak{N}.$$

By enlarging $C^2 + D^2$ if necessary, the inequality (C.1) follows with $\rho = (C^2 + D^2)^{-1/2}$ and $0 < \rho < 1$.

<sup>©</sup> The Editor(s) (if applicable) and The Author(s) 2020 J. Behrndt et al., *Boundary Value Problems, Weyl Functions, and Differential Operators*, Monographs in Mathematics 108, https://doi.org/10.1007/978-3-030-36714-5

(ii) ⇒ (i) Let (hn) be a sequence in M + N converging to h ∈ H. Decompose each element h<sup>n</sup> as

$$h\_n = f\_n + g\_n, \quad f\_n \in \mathfrak{M}, \quad g\_n \in \mathfrak{N}.$$

Since $(h_n)$ is a Cauchy sequence in H it follows from (C.1) that $(f_n)$ and $(g_n)$ are Cauchy sequences in M and N, respectively. Therefore, there exist elements f ∈ M and g ∈ N, so that $f_n \to f$ in M and $g_n \to g$ in N. Hence, h = f + g ∈ M + N. Thus, M + N is closed. To see that the sum is direct, assume that h ∈ M ∩ N. Then, in particular, h ∈ M and −h ∈ N, and the inequality (C.1) with f = h and g = −h implies h = 0. $\square$
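That the closedness of M + N in statement (i) can genuinely fail is shown by a standard example (not from the text): in $\ell^2$ take $\mathfrak{M} = \overline{\operatorname{span}}\{e_{2n}\}$ and $\mathfrak{N} = \overline{\operatorname{span}}\{e_{2n} + (n+1)^{-1}e_{2n+1}\}$. Both are closed with trivial intersection, but the quotient $\|f+g\|/\sqrt{\|f\|^2 + \|g\|^2}$ can be made arbitrarily small, so no $\rho > 0$ as in (C.1) exists and the sum is not closed. The short Python computation below tracks the decay of the best possible $\rho$:

```python
import math

# With f = e_{2n} in M and g = -(e_{2n} + e_{2n+1}/(n+1)) in N one has
# ||f + g|| = 1/(n+1) while ||f||^2 + ||g||^2 = 2 + 1/(n+1)^2, so any
# constant rho in (C.1) is bounded above by the quantity below, which
# tends to 0 as n grows.
bounds = []
for n in range(1, 20):
    norm_sum = 1.0 / (n + 1)                    # ||f + g||
    norm_sq = 1.0 + (1.0 + 1.0 / (n + 1) ** 2)  # ||f||^2 + ||g||^2
    bounds.append(norm_sum / math.sqrt(norm_sq))

assert all(b2 < b1 for b1, b2 in zip(bounds, bounds[1:]))  # strictly decreasing
assert bounds[-1] < 0.05                                   # no uniform rho > 0
```

By Lemma C.1 this failure of (C.1) is exactly the failure of M + N to be closed.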

Let M and N be closed subspaces of H. Then the intersection M ∩ N is a closed subspace which generates an orthogonal decomposition of the Hilbert space:

$$
\mathfrak{H} = (\mathfrak{M} \cap \mathfrak{N}) \oplus (\mathfrak{M} \cap \mathfrak{N})^{\perp}.
$$

In order to study properties of the sums M + N and M<sup>⊥</sup> + N<sup>⊥</sup> it is sometimes useful to reduce to a direct sum.

**Lemma C.2.** Let M and N be closed subspaces of H. Then the subspaces

$$
\mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp \quad \text{and} \quad \mathfrak{N} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp
$$

have a trivial intersection and due to

$$(\mathfrak{M}\cap\mathfrak{N})^{\perp}=[\mathfrak{M}\cap(\mathfrak{M}\cap\mathfrak{N})^{\perp}]\oplus\mathfrak{M}^{\perp}=[\mathfrak{N}\cap(\mathfrak{M}\cap\mathfrak{N})^{\perp}]\oplus\mathfrak{N}^{\perp}\tag{C.2}$$

their orthogonal complements in the subspace (M ∩ N)<sup>⊥</sup> coincide with M<sup>⊥</sup> and N⊥, respectively. Moreover,

$$\begin{aligned} \mathfrak{M} &= [\mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp] \oplus (\mathfrak{M} \cap \mathfrak{N}), \\ \mathfrak{N} &= [\mathfrak{N} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp] \oplus (\mathfrak{M} \cap \mathfrak{N}), \end{aligned} \tag{C.3}$$

and, consequently, M + N has the decomposition

$$\mathfrak{M} + \mathfrak{N} = \left[ \mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp + \mathfrak{N} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp \right] \oplus (\mathfrak{M} \cap \mathfrak{N}).\tag{C.4}$$

Proof. It is clear that M∩(M∩N)<sup>⊥</sup> and N∩(M∩N)<sup>⊥</sup> have a trivial intersection. To see the first identity in (C.2) note that

$$\mathfrak{M}\cap(\mathfrak{M}\cap\mathfrak{N})^\perp \subset (\mathfrak{M}\cap\mathfrak{N})^\perp \quad \text{and} \quad \mathfrak{M}^\perp \subset (\mathfrak{M}\cap\mathfrak{N})^\perp,$$

so that the right-hand side is contained in the left-hand side. To see the other inclusion, decompose H as

$$
\mathfrak{H} = \mathfrak{M} \oplus \mathfrak{M}^{\perp}.
$$

Let f ∈ (M ∩ N)⊥. Then according to this decomposition one has

$$f = g + h, \quad g \in \mathfrak{M}, \quad h \in \mathfrak{M}^{\perp}.$$

Since M<sup>⊥</sup> ⊂ (M ∩ N)<sup>⊥</sup> one sees that g ∈ M ∩ (M ∩ N)⊥. Hence, the left-hand side is contained in the right-hand side. The second identity in (C.2) follows by interchanging M and N.

Since M∩N is a closed subspace, it is clear that H has the following orthogonal decomposition

$$
\mathfrak{H} = (\mathfrak{M} \cap \mathfrak{N}) \oplus (\mathfrak{M} \cap \mathfrak{N})^{\perp}.
$$

Hence, in an analogous way as above, the decompositions (C.3) are clear. Thus, (C.4) follows. $\square$

The above reduction process will play a role in the following theorem.

**Theorem C.3.** Let M and N be closed subspaces of H. Then the following statements are equivalent:

(i) M + N is closed;

(ii) M<sup>⊥</sup> + N<sup>⊥</sup> is closed.

Furthermore, the following statements are equivalent:

(iii) M + N is closed and M ∩ N = {0};

(iv) M<sup>⊥</sup> + N<sup>⊥</sup> = H,

and the following statements are equivalent:

(v) M + N = H and M ∩ N = {0};

(vi) M<sup>⊥</sup> + N<sup>⊥</sup> = H and M<sup>⊥</sup> ∩ N<sup>⊥</sup> = {0}.

Proof. (iii) ⇒ (iv) Assume that M∩N = {0}. It will be shown that H = M<sup>⊥</sup> +N<sup>⊥</sup> or, equivalently, that H ⊂ M<sup>⊥</sup> +N⊥. To see this, choose h ∈ H. The element h ∈ H induces two linear functionals F and G on M + N by

$$F(\gamma) = (\alpha, h), \quad G(\gamma) = (\beta, h), \quad \gamma = \alpha + \beta, \quad \alpha \in \mathfrak{M}, \ \beta \in \mathfrak{N}.\tag{C.5}$$

Since M + N is closed and M ∩ N = {0}, it follows from the inequality (C.1) that the functionals F and G are bounded on M + N. Extend the functionals F and G trivially to bounded linear functionals on all of H, and denote the extensions by F and G, respectively. By the Riesz representation theorem there exist unique elements f,g ∈ H such that

$$F(\gamma) = (\gamma, f), \quad G(\gamma) = (\gamma, g), \quad \gamma \in \mathfrak{H}.\tag{C.6}$$

The definition in (C.5) implies that F(γ)=0, γ ∈ N, and G(γ) = 0, γ ∈ M. For the corresponding elements f and g in (C.6) this means that f ∈ N<sup>⊥</sup> and g ∈ M⊥. Now for γ = α + β, α ∈ M, β ∈ N, it follows from (C.5) and (C.6) that

$$(\gamma, h) = (\alpha, h) + (\beta, h) = F(\gamma) + G(\gamma) = (\gamma, f) + (\gamma, g).$$

This implies that k = h − f − g ∈ (M + N)<sup>⊥</sup> ⊂ M<sup>⊥</sup> and hence h = f + g + k with f ∈ N<sup>⊥</sup> and g + k ∈ M⊥. Therefore, H ⊂ M<sup>⊥</sup> + N⊥, which completes the proof.

(i) ⇒ (ii) Use the reduction process from Lemma C.2. Since M+ N is assumed to be closed it follows from (C.4) that

$$
\mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp + \mathfrak{N} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp
$$

is closed in (M ∩ N)⊥. Since this is a direct sum, the sum of their orthogonal complements is closed in (M∩N)<sup>⊥</sup> by the implication (iii) ⇒ (iv). Recall that the sum of their orthogonal complements coincides with M<sup>⊥</sup> + N⊥.

(ii) ⇒ (i) This follows from the previous implication by symmetry.

(iv) ⇒ (iii) Apply (ii) ⇒ (i) to conclude that M + N is closed. This sum is direct, since M ∩ N = (M<sup>⊥</sup> + N<sup>⊥</sup>)<sup>⊥</sup> = H<sup>⊥</sup> = {0} by assumption.

(v) ⇔ (vi) It suffices to show (v) ⇒ (vi). Note that it follows from (iii) ⇒ (iv) that M<sup>⊥</sup> + N<sup>⊥</sup> = H. Moreover, M<sup>⊥</sup> ∩ N<sup>⊥</sup> = (M + N)<sup>⊥</sup> = {0} by assumption, which completes the argument. $\square$

Let again M and N be closed subspaces of H and assume that M + N is closed. It follows from (C.3) that M + N can be written as

$$\begin{split} \mathfrak{M} + \mathfrak{N} &= \left( [\mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp] \oplus (\mathfrak{M} \cap \mathfrak{N}) \right) + \mathfrak{N} \\ &= [\mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp] + \mathfrak{N} .\end{split} \tag{C.7}$$

Since M + N is closed, the orthogonal decomposition H = (M + N)<sup>⊥</sup> ⊕ (M + N) and (C.7) show that

$$\begin{aligned} \mathfrak{H} &= \left( (\mathfrak{M} + \mathfrak{N})^\perp \oplus [\mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp] \right) + \mathfrak{N} \\ &= (\mathfrak{M} + \mathfrak{N})^\perp \oplus \left( [\mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp] + \mathfrak{N} \right). \end{aligned} \tag{C.8}$$

The sum in the identity (C.8) is direct. To see this, assume that

$$f + g + \varphi = 0, \quad f \in (\mathfrak{M} + \mathfrak{N})^\perp, \quad g \in \mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp, \quad \varphi \in \mathfrak{N}.$$

Then f = −g − ϕ, where −g − ϕ ∈ M+ N. Hence, f = 0 and g = −ϕ implies that g = 0 and ϕ = 0.

The decompositions (C.7) and (C.8) will be used in the proof of the next lemma.

**Lemma C.4.** Let B ∈ **B**(H, K) have a closed range and let M be a closed subspace of H. Then M + ker B is closed if and only if B(M) is closed.

Proof. Assume that M + ker B is closed and set N = ker B. It follows from the direct sum decomposition (C.8) that B maps

$$(\mathfrak{M} + \mathfrak{N})^\perp \oplus [\mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp]$$

bijectively onto the closed subspace ran B and hence B also provides a bijection between the closed subspaces of (M + N)<sup>⊥</sup> ⊕ [M ∩ (M ∩ N)⊥] and those of ran B. In particular, it follows from this observation and (C.7) that

$$B(\mathfrak{M}) = B\left(\mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp\right)$$

is closed in K.

For the converse statement assume that B(M) is closed. In order to show that M + ker B is closed, let f<sup>n</sup> ∈ M and ϕ<sup>n</sup> ∈ ker B have the property that

$$f\_n + \varphi\_n \to \chi$$

for some χ ∈ H. In particular, this shows that Bf<sup>n</sup> → Bχ. The assumption that B(M) is closed implies that Bf<sup>n</sup> → Bf for some f ∈ M. Hence,

$$
\chi - f = \varphi \in \ker B \quad \text{or} \quad \chi = f + \varphi \in \mathfrak{M} + \ker B.
$$

It follows that M + ker B is closed. $\square$

There is another way to approach the topic of sums of closed subspaces, namely via various notions of opening or gap between closed subspaces.

**Definition C.5.** Let M and N be closed subspaces of H with corresponding orthogonal projections P<sup>M</sup> and PN, respectively. The opening ω(M, N) between M and N is defined as

$$\omega(\mathfrak{M}, \mathfrak{N}) = \|P\_{\mathfrak{M}} P\_{\mathfrak{N}}\|\,.$$

It is clear that 0 ≤ ω(M, N) ≤ 1 and that ω(M, N) = ω(N, M), since one has $\|A\| = \|A^*\|$ for every A ∈ **B**(H).

**Proposition C.6.** Let M and N be closed subspaces of H. Then the following statements are equivalent:

(i) ω(M, N) < 1;

(ii) M + N is closed and M ∩ N = {0}.

Proof. (i) ⇒ (ii) Assume that ω(M, N) < 1. Observe that for f ∈ M and g ∈ N one has

$$|(f,g)| = |(P\_{\mathfrak{M}}f, P\_{\mathfrak{N}}g)| \le \omega(\mathfrak{M}, \mathfrak{N}) ||f|| ||g||.$$

It follows that for all f ∈ M and g ∈ N

$$\begin{aligned} \|f\|^2 + \|g\|^2 &= \|f + g\|^2 - 2\operatorname{Re}(f, g) \\ &\le \|f + g\|^2 + 2|(f, g)| \\ &\le \|f + g\|^2 + 2\,\omega(\mathfrak{M}, \mathfrak{N}) \|f\| \|g\| \\ &\le \|f + g\|^2 + \omega(\mathfrak{M}, \mathfrak{N}) \left(\|f\|^2 + \|g\|^2\right). \end{aligned}$$

In particular, this shows that

$$(1 - \omega(\mathfrak{M}, \mathfrak{N})) \left( \|f\|^2 + \|g\|^2 \right) \le \|f + g\|^2, \quad f \in \mathfrak{M}, \quad g \in \mathfrak{N}.$$

Hence, Lemma C.1 implies that M ∩ N = {0} and that M + N is closed, which gives (ii).

(ii) ⇒ (i) Assume that M + N is closed and M ∩ N = {0}. By Lemma C.1, the inequality (C.1) holds for some 0 < ρ < 1, hence for all f ∈ M and g ∈ N

$$\rho^2 \left( \|f\|^2 + \|g\|^2 \right) \le \|f\|^2 + 2\text{Re}\left(f, g\right) + \|g\|^2$$

or, equivalently,

$$-2\text{Re}\left(f,g\right)\le\left(1-\rho^2\right)\left(\|f\|^2+\|g\|^2\right).\tag{C.9}$$

Now let h, k ∈ H with $\|h\| \le 1$ and $\|k\| \le 1$ and choose $\theta \in \mathbb{R}$ such that

$$e^{i\theta}(P_{\mathfrak{M}}h, P_{\mathfrak{N}}k) = -|(P_{\mathfrak{M}}h, P_{\mathfrak{N}}k)|.$$

Then $(e^{i\theta}P_{\mathfrak{M}}h, P_{\mathfrak{N}}k) \in \mathbb{R}$ and (C.9) with $f = e^{i\theta}P_{\mathfrak{M}}h$ and $g = P_{\mathfrak{N}}k$ yields

$$\begin{aligned} |(h, P\_{\mathfrak{M}} P\_{\mathfrak{N}} k)| &= |(P\_{\mathfrak{M}} h, P\_{\mathfrak{N}} k)| \\ &= -\operatorname{Re}\left(e^{i\theta} P\_{\mathfrak{M}} h, P\_{\mathfrak{N}} k\right) \\ &\leq \frac{1 - \rho^2}{2} \left( ||e^{i\theta} P\_{\mathfrak{M}} h||^2 + ||P\_{\mathfrak{N}} k||^2 \right) \\ &\leq 1 - \rho^2. \end{aligned}$$

This implies $\|P_{\mathfrak{M}}P_{\mathfrak{N}}\| \le 1 - \rho^2 < 1$ and hence (i) holds. $\square$
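The case of two lines in the plane gives a concrete feeling for the opening (a hypothetical finite-dimensional illustration, not from the text): for $\mathfrak{M} = \operatorname{span}\{(1,0)\}$ and $\mathfrak{N} = \operatorname{span}\{(\cos a, \sin a)\}$ one has $\omega(\mathfrak{M}, \mathfrak{N}) = \|P_{\mathfrak{M}}P_{\mathfrak{N}}\| = |\cos a| < 1$, and the inequality $(1 - \omega(\mathfrak{M}, \mathfrak{N}))(\|f\|^2 + \|g\|^2) \le \|f + g\|^2$ from the proof can be sampled numerically:

```python
import math, random

def proj(u):
    # 2x2 matrix of the orthogonal projection onto span{u}, u a unit vector
    return [[u[0]*u[0], u[0]*u[1]], [u[1]*u[0], u[1]*u[1]]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def opnorm(A):
    # operator norm of a 2x2 matrix = largest singular value
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    t = a*a + b*b + c*c + d*d            # trace of A^T A
    det = (a*d - b*c) ** 2               # determinant of A^T A
    return math.sqrt((t + math.sqrt(max(t*t - 4*det, 0.0))) / 2)

a = 1.0                                   # angle between the two lines
u, v = (1.0, 0.0), (math.cos(a), math.sin(a))
omega = opnorm(matmul(proj(u), proj(v)))
assert abs(omega - abs(math.cos(a))) < 1e-12   # opening equals |cos a|

random.seed(0)
for _ in range(1000):
    s, t = random.uniform(-5, 5), random.uniform(-5, 5)
    f = (s*u[0], s*u[1])                  # f in M
    g = (t*v[0], t*v[1])                  # g in N
    lhs = (1 - omega) * (s*s + t*t)
    rhs = (f[0]+g[0])**2 + (f[1]+g[1])**2
    assert lhs <= rhs + 1e-9              # inequality from the proof
```

As the angle $a$ tends to $0$ the opening tends to $1$ and the inequality degenerates, matching the failure of (C.1) for subspaces with a small angle between them.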

**Corollary C.7.** Let M and N be closed subspaces of H such that M + N is closed and M ∩ N = {0}. Then for closed subspaces M<sup>1</sup> ⊂ M and N<sup>1</sup> ⊂ N also the subspace M<sup>1</sup> + N<sup>1</sup> is closed.

Proof. This statement follows immediately from Proposition C.6 by noting that $\omega(\mathfrak{M}_1, \mathfrak{N}_1) \le \omega(\mathfrak{M}, \mathfrak{N}) < 1$. $\square$

Proposition C.6 and the reduction of closed subspaces in Lemma C.2 lead to the following corollary.

**Corollary C.8.** Let M and N be closed subspaces of H. Then the following statements are equivalent:

(i) ω(M ∩ (M ∩ N)<sup>⊥</sup>, N ∩ (M ∩ N)<sup>⊥</sup>) < 1;

(ii) M + N is closed.

Proof. Since the closed subspaces M ∩ (M ∩ N)<sup>⊥</sup> and N ∩ (M ∩ N)<sup>⊥</sup> have trivial intersection, it follows from Proposition C.6 that (i) is equivalent to

$$\mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp + \mathfrak{M} \cap (\mathfrak{M} \cap \mathfrak{N})^\perp \tag{C.10}$$

is closed. Furthermore, the decomposition (C.4) shows that the space in (C.10) is closed if and only if M + N is closed. -

The notion of opening is now supplemented with the notion of gap between closed subspaces.

**Definition C.9.** Let M and N be closed subspaces of H with corresponding orthogonal projections $P\_{\mathfrak{M}}$ and $P\_{\mathfrak{N}}$, respectively. The gap g(M, N) between M and N is defined as

$$g(\mathfrak{M}, \mathfrak{N}) = ||P\_{\mathfrak{M}} - P\_{\mathfrak{N}}||.$$

Note that it follows directly from the definition that

$$g(\mathfrak{M}, \mathfrak{N}) = g(\mathfrak{N}, \mathfrak{M}) \quad \text{and} \quad g(\mathfrak{M}^\perp, \mathfrak{N}^\perp) = g(\mathfrak{M}, \mathfrak{N}).\tag{C.11}$$

The connection between the gap and the opening between closed subspaces is contained in the following proposition.

**Proposition C.10.** Let M and N be closed subspaces in H. Then

$$g(\mathfrak{M}, \mathfrak{N}) = \max\left\{\omega(\mathfrak{M}, \mathfrak{N}^{\perp}), \omega(\mathfrak{M}^{\perp}, \mathfrak{N})\right\} \tag{C.12}$$

and, in particular, g(M, N) ≤ 1.

Proof. First observe that

$$P\_{\mathfrak{N}^\perp}P\_{\mathfrak{M}} = (I - P\_{\mathfrak{N}})P\_{\mathfrak{M}} = (P\_{\mathfrak{M}} - P\_{\mathfrak{N}})P\_{\mathfrak{M}},$$

so that $\omega(\mathfrak{M}, \mathfrak{N}^\perp) = \omega(\mathfrak{N}^\perp, \mathfrak{M}) \le g(\mathfrak{M}, \mathfrak{N})\,\|P\_{\mathfrak{M}}\| \le g(\mathfrak{M}, \mathfrak{N})$. Therefore, also

$$\omega(\mathfrak{M}^\perp, \mathfrak{N}) \le g(\mathfrak{M}^\perp, \mathfrak{N}^\perp) = g(\mathfrak{M}, \mathfrak{N})$$

and hence

$$\max\left\{\omega(\mathfrak{M},\mathfrak{N}^\perp), \omega(\mathfrak{M}^\perp,\mathfrak{N})\right\} \le g(\mathfrak{M},\mathfrak{N}).$$

Furthermore, observe that for all h ∈ H

$$\begin{aligned} \|(P\_{\mathfrak{M}} - P\_{\mathfrak{N}})h\|^2 &= \|(I - P\_{\mathfrak{N}})P\_{\mathfrak{M}}h - P\_{\mathfrak{N}}(I - P\_{\mathfrak{M}})h\|^2 \\ &= \|P\_{\mathfrak{N}^\perp}P\_{\mathfrak{M}}h\|^2 + \|P\_{\mathfrak{N}}P\_{\mathfrak{M}^\perp}h\|^2 \\ &= \|P\_{\mathfrak{N}^\perp}P\_{\mathfrak{M}}P\_{\mathfrak{M}}h\|^2 + \|P\_{\mathfrak{N}}P\_{\mathfrak{M}^\perp}P\_{\mathfrak{M}^\perp}h\|^2 \\ &\leq \omega(\mathfrak{M}, \mathfrak{N}^\perp)^2 \|P\_{\mathfrak{M}}h\|^2 + \omega(\mathfrak{M}^\perp, \mathfrak{N})^2 \|P\_{\mathfrak{M}^\perp}h\|^2, \end{aligned}$$

and the last term is majorized by

$$\max\left\{\omega(\mathfrak{M},\mathfrak{N}^\perp)^2, \omega(\mathfrak{M}^\perp,\mathfrak{N})^2\right\} \left(\|P\_{\mathfrak{M}}h\|^2 + \|P\_{\mathfrak{M}^\perp}h\|^2\right).$$

It follows that

$$g(\mathfrak{M}, \mathfrak{N}) \le \max\left\{\omega(\mathfrak{M}, \mathfrak{N}^\perp), \omega(\mathfrak{M}^\perp, \mathfrak{N})\right\}.$$

Therefore, (C.12) has been established. □
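In a finite-dimensional space both quantities can be computed directly from the orthogonal projections, which makes the identity (C.12) easy to check numerically. The following sketch is an illustration only, not part of the text: it uses numpy, the helper names `proj` and `opening` are ad hoc, and the opening ω(M, N) is read as the operator norm of $P\_{\mathfrak{M}}P\_{\mathfrak{N}}$, as in the proofs above.

```python
import numpy as np

def proj(V):
    """Orthogonal projection onto the column span of V (V has full column rank)."""
    Q, _ = np.linalg.qr(V)
    return Q @ Q.T

def opening(P, Q):
    """Opening between the ranges of two orthogonal projections: ||P Q||."""
    return np.linalg.norm(P @ Q, 2)

rng = np.random.default_rng(0)
n = 6
PM = proj(rng.standard_normal((n, 2)))   # a random 2-dimensional subspace M
PN = proj(rng.standard_normal((n, 2)))   # a random 2-dimensional subspace N
I = np.eye(n)

gap = np.linalg.norm(PM - PN, 2)                     # g(M, N) = ||P_M - P_N||
rhs = max(opening(PM, I - PN), opening(I - PM, PN))  # max{w(M, N-perp), w(M-perp, N)}

assert abs(gap - rhs) < 1e-10   # identity (C.12)
assert gap <= 1 + 1e-12         # g(M, N) <= 1
```

Note that two random subspaces of equal dimension generically satisfy g(M, N) < 1; by Proposition C.11 unequal dimensions would force g(M, N) = 1.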

**Proposition C.11.** Let M and N be closed subspaces of H. Then

g(M, N) < 1 ⇒ dim M = dim N.

Proof. If $g(\mathfrak{M}, \mathfrak{N}) = \|P\_{\mathfrak{M}} - P\_{\mathfrak{N}}\| < 1$, then the bounded operator

$$I - (P\_{\mathfrak{M}} - P\_{\mathfrak{N}})$$

maps H onto itself with bounded inverse. Hence,

$$P\_{\mathfrak{M}}P\_{\mathfrak{N}}\mathfrak{H} = P\_{\mathfrak{M}}\left[I - (P\_{\mathfrak{M}} - P\_{\mathfrak{N}})\right]\mathfrak{H} = P\_{\mathfrak{M}}\mathfrak{H},$$

so that $P\_{\mathfrak{M}}$ maps ran $P\_{\mathfrak{N}}$ onto ran $P\_{\mathfrak{M}}$. Observe that for h ∈ ran $P\_{\mathfrak{N}}$

$$\|P\_{\mathfrak{M}}h\| = \|h + (P\_{\mathfrak{M}} - P\_{\mathfrak{N}})h\| \ge \left(1 - \|P\_{\mathfrak{M}} - P\_{\mathfrak{N}}\|\right) \|h\|,$$

and hence $P\_{\mathfrak{M}}$ maps ran $P\_{\mathfrak{N}}$ boundedly, with bounded inverse, onto ran $P\_{\mathfrak{M}}$. This implies that the dimensions of M = ran $P\_{\mathfrak{M}}$ and N = ran $P\_{\mathfrak{N}}$ coincide. □

**Theorem C.12.** Let M and N be closed subspaces of H. Then the following statements are equivalent:

(i) g(M, N⊥) < 1;

(ii) M + N = H and M ∩ N = {0};

and in this case dim M = dim N⊥ and dim M⊥ = dim N.

Proof. (i) ⇒ (ii) Proposition C.10 implies ω(M, N) < 1 and ω(M⊥, N⊥) < 1. By Proposition C.6, the condition ω(M, N) < 1 implies that M + N is closed and that M ∩ N = {0}, and in the same way ω(M⊥, N⊥) < 1 yields that M⊥ + N⊥ is closed and that M⊥ ∩ N⊥ = {0}. The identity M⊥ ∩ N⊥ = {0} implies that M + N is dense in H. Therefore, M + N = H.

(ii) ⇒ (i) It follows from Proposition C.6 that ω(M, N) < 1. Moreover, the equivalence of (v) and (vi) in Theorem C.3 shows M⊥ + N⊥ = H and M⊥ ∩ N⊥ = {0}, and therefore ω(M⊥, N⊥) < 1 by Proposition C.6. Now Proposition C.10 implies that g(M, N⊥) < 1.

It is clear from (i) and Proposition C.11 that dim M = dim N⊥. Furthermore, as g(M⊥, N) = g(M, N⊥) by (C.11), the same argument gives dim M⊥ = dim N. □

## **Appendix D**

## **Factorization of Bounded Linear Operators**

This appendix contains a number of results pertaining to the factorization of bounded linear operators based on range inclusions or norm inequalities. These results will be useful in conjunction with range inclusions or norm inequalities for relations.

Let H and K be Hilbert spaces and let H ∈ **B**(H, K). The restriction of H to (ker H)⊥ ⊂ H,

$$H: \text{ (ker } H)^\perp \to \text{ran } H,$$

is a bijective mapping between (ker H)⊥ and ran H. The inverse of this restriction is a (linear) operator from ran H onto (ker H)⊥ and is called the Moore–Penrose inverse of the operator H, which in general is an unbounded operator. Since $\mathfrak{H} = \ker H \oplus (\ker H)^\perp$, one sees immediately that

$$H^{(-1)}H = P\_{(\ker H)^\perp} \,. \tag{D.1}$$

Moreover, the closed graph theorem shows that ran H is closed if and only if $H^{(-1)}$ takes ran H boundedly into (ker H)⊥. Hence, if ran H is closed, then

$$H^{(-1)} \in \mathbf{B}\left(\operatorname{ran} H, (\ker H)^{\perp}\right) \quad \text{and} \quad (H^{(-1)})^\times \in \mathbf{B}\left((\ker H)^{\perp}, \operatorname{ran} H\right), \tag{D.2}$$

where × denotes the adjoint with respect to the scalar products in ran H and (ker H)⊥.
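For matrices the identity (D.1) is easy to verify numerically: in finite dimensions ran H is automatically closed, and the Moore–Penrose inverse, extended by zero on (ran H)⊥, is exactly what `numpy.linalg.pinv` computes. A small sketch, offered as an illustration only under these finite-dimensional assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# H: a 5x4 matrix of rank 2, so that both ker H and (ran H)-perp are nontrivial
H = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

Hplus = np.linalg.pinv(H)      # Moore-Penrose inverse, extended by zero on (ran H)-perp

U, s, Vt = np.linalg.svd(H)
r = int(np.sum(s > 1e-10))     # numerical rank (r = 2 by construction)
P = Vt[:r].T @ Vt[:r]          # orthogonal projection onto (ker H)-perp = closure of ran H^*

assert np.allclose(Hplus @ H, P)       # (D.1): H^(-1) H = P_{(ker H)-perp}
assert np.allclose(H @ Hplus @ H, H)   # consequence: H is inverted on (ker H)-perp
```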

**Lemma D.1.** Let H and K be Hilbert spaces and let H ∈ **B**(H, K). Then ran H is closed if and only if ran H<sup>∗</sup> is closed.

Proof. Since H ∈ **B**(H, K), it suffices to show that if ran H is closed in K, then ran H∗ is closed in H. Since $\overline{\text{ran}}\,H^\* = (\ker H)^\perp$, one only needs to show

$$(\ker H)^\perp \subset \operatorname{ran} H^\*.$$

J. Behrndt et al., *Boundary Value Problems, Weyl Functions, and Differential Operators*, Monographs in Mathematics 108, https://doi.org/10.1007/978-3-030-36714-5

<sup>©</sup> The Editor(s) (if applicable) and The Author(s) 2020

For this let h ∈ (ker H)⊥ and f ∈ H. Then, using (D.1) and the operator $(H^{(-1)})^\times \in \mathbf{B}\left((\ker H)^{\perp}, \text{ran}\,H\right)$ from (D.2), one has

$$\begin{aligned} (h, f)\_{\mathfrak{H}} &= (h, P\_{\text{(ker } H)^\perp} f)\_{\text{(ker } H)^\perp} \\ &= \left( h, H^{(-1)} H f \right)\_{\text{(ker } H)^\perp} \\ &= \left( (H^{(-1)})^\times h, H f \right)\_{\text{ran } H} \\ &= \left( (H^{(-1)})^\times h, H f \right)\_{\mathfrak{H}} \\ &= \left( H^\* (H^{(-1)})^\times h, f \right)\_{\mathfrak{H}} \end{aligned}$$

and it follows that $h = H^\*(H^{(-1)})^\times h$. Hence, h ∈ ran H∗ and therefore ran H∗ is closed. □

Lemma D.1 has a direct consequence, which is useful.

**Lemma D.2.** For H ∈ **B**(H, K) there are the inclusions

$$
\text{ran}\,HH^\* \subset \text{ran}\,H \subset \overline{\text{ran}}\,H = \overline{\text{ran}}\,HH^\*.\tag{D.3}
$$

Moreover, the following statements are equivalent:

(i) ran HH∗ is closed;

(ii) ran H is closed;

(iii) ran HH∗ = ran H;

in which case all inclusions in (D.3) are identities.

Proof. The chain of inclusions in (D.3) is clear; the last equality is a consequence of the identity ker H<sup>∗</sup> = ker HH∗.

(i) ⇒ (iii) Assume that ran HH<sup>∗</sup> is closed. Then all inclusions in (D.3) are equalities and, in particular, (iii) follows.

(iii) ⇒ (ii) Assume that ran HH∗ = ran H. It will be sufficient to show that $\overline{\text{ran}}\,H^\* \subset \text{ran}\,H^\*$, which implies that ran H∗ is closed and hence also ran H is closed by Lemma D.1. Assume that $h \in \overline{\text{ran}}\,H^\* = (\ker H)^\perp$. By the assumption, it follows that Hh = HH∗k for some k, which gives H(h − H∗k) = 0. Note that h ∈ (ker H)⊥ and $H^\*k \in \text{ran}\,H^\* \subset \overline{\text{ran}}\,H^\* = (\ker H)^\perp$, and hence h = H∗k. Thus, $\overline{\text{ran}}\,H^\* \subset \text{ran}\,H^\*$, which implies that ran H∗ is closed, so that also ran H is closed by Lemma D.1.

(ii) ⇒ (i) Assume that ran H is closed. Then it follows from (D.3) that

$$
\overline{\text{ran}}\,HH^\* = \text{ran}\,H.\tag{\text{D.4}}
$$

It will suffice to show that $\overline{\text{ran}}\,HH^\* \subset \text{ran}\,HH^\*$. Assume that $k \in \overline{\text{ran}}\,HH^\*$. Then k = Hh for some h ∈ (ker H)⊥ by (D.4). As $(\ker H)^\perp = \overline{\text{ran}}\,H^\* = \text{ran}\,H^\*$ by the assumption (ii) and Lemma D.1, one has h ∈ ran H∗ and therefore k ∈ ran HH∗. This shows that $\overline{\text{ran}}\,HH^\* \subset \text{ran}\,HH^\*$, which implies that ran HH∗ is closed. □

Let F, G, and H be Hilbert spaces, let A ∈ **B**(F, H), B ∈ **B**(G, H), and C ∈ **B**(F, G), and assume that the following factorization holds:

$$A = BC.\tag{D.5}$$

Then it follows from (D.5) that ker C ⊂ ker A. Note that the orthogonal decomposition $\mathfrak{G} = \ker B \oplus \overline{\text{ran}}\,B^\*$ allows one to write A = BPC, where P is the orthogonal projection in G onto $\overline{\text{ran}}\,B^\*$. Hence, one may always assume that $\text{ran}\,C \subset \overline{\text{ran}}\,B^\*$. In this case ker C = ker A, and hence C maps $(\ker A)^\perp = \overline{\text{ran}}\,A^\*$ into $\overline{\text{ran}}\,B^\*$ injectively. Moreover, in this case the operator C in (D.5) is uniquely determined. The following proposition is a version of the well-known Douglas lemma.

**Proposition D.3.** Assume that A ∈ **B**(F, H) and B ∈ **B**(G, H). Let ρ > 0. Then the following statements are equivalent:

(i) A = BC for some C ∈ **B**(F, G) with $\text{ran}\,C \subset \overline{\text{ran}}\,B^\*$ and $\|C\| \le \rho$;

(ii) ran A ⊂ ran B and $\|B^{(-1)}\varphi\| \le \rho\,\|A^{(-1)}\varphi\|$, ϕ ∈ ran A;

(iii) $AA^\* \le \rho^2 BB^\*$.

Moreover, if ran A ⊂ ran B, then $\|B^{(-1)}\varphi\| \le \rho\,\|A^{(-1)}\varphi\|$, ϕ ∈ ran A, for some ρ > 0.

Proof. (i) ⇒ (ii) It is clear that ran A ⊂ ran B and therefore also

$$B^{(-1)}A = B^{(-1)}BC = P\_{(\ker B)^\perp} C = P\_{\overline{\text{ran}}\,B^\*} C = C.$$

From the last identity one sees that

$$\|B^{(-1)}A\psi\| \le \|C\| \, \|\psi\|, \quad \psi \in \mathfrak{F}.$$

Now let ϕ ∈ ran A. Then there exists ψ ⊥ ker A such that ϕ = Aψ. Therefore, $\psi = A^{(-1)}\varphi$ and it follows immediately that

$$\|B^{(-1)}\varphi\| \le \|C\| \, \|A^{(-1)}\varphi\|, \quad \varphi \in \text{ran}\,A.$$

Hence, (ii) has been shown.

(ii) ⇒ (i) It follows from (ii) that for each h ∈ F there exists a uniquely determined element $k \in (\ker B)^\perp = \overline{\text{ran}}\,B^\*$ such that Ah = Bk. Hence, the mapping h ↦ k from F to $\overline{\text{ran}}\,B^\* \subset \mathfrak{G}$ defines a linear operator C with dom C = F and one has A = BC. To show that the operator C is closed, assume that

$$h\_n \to h \quad \text{and} \quad k\_n = Ch\_n \to k,$$

where $k\_n \in \overline{\text{ran}}\,B^\*$. Since A and B are bounded linear operators, it follows from $Ah\_n = Bk\_n$ that Ah = Bk. Furthermore, $\overline{\text{ran}}\,B^\*$ is closed, which implies that $k \in \overline{\text{ran}}\,B^\*$. Hence, k = Ch and thus C is closed. By the closed graph theorem, it follows that C ∈ **B**(F, G). The property $\text{ran}\,C \subset \overline{\text{ran}}\,B^\*$ holds by construction. Furthermore, one sees that for ψ ∈ F

$$\begin{aligned} \|C\psi\| &= \|P\_{\overline{\text{ran}}\,B^\*}C\psi\| \\ &= \|P\_{(\ker B)^\perp}C\psi\| \\ &= \|B^{(-1)}BC\psi\| \\ &\le \rho\,\|A^{(-1)}A\psi\| \\ &= \rho\,\|P\_{(\ker A)^\perp}\psi\| \\ &\le \rho\,\|\psi\|, \end{aligned}$$

and so $\|C\| \le \rho$.

(i) ⇒ (iii) It is clear that AA∗ = BCC∗B∗. For ψ ∈ H one has

$$\left(AA^\*\psi,\psi\right) = \left\|C^\*B^\*\psi\right\|^2 \le \left\|C^\*\right\|^2 \left\|B^\*\psi\right\|^2 = \left\|C^\*\right\|^2 \left(BB^\*\psi,\psi\right)$$

and hence $AA^\* \le \|C^\*\|^2 BB^\*$. Since $\|C^\*\| = \|C\| \le \rho$, this leads to (iii).

(iii) ⇒ (i) The mapping D from ran B<sup>∗</sup> to ran A<sup>∗</sup> given by

$$B^\*h \mapsto A^\*h, \quad h \in \mathfrak{H},$$

is well defined and linear. To see that it is well defined, observe that B∗h = 0 implies AA∗h = 0 and hence A∗h = 0. Then DB∗h = A∗h for h ∈ H and, by (iii), $\|DB^\*h\| \le \rho\,\|B^\*h\|$, so that $D \in \mathbf{B}(\text{ran}\,B^\*, \text{ran}\,A^\*)$. Due to the boundedness, this operator has a unique extension in $\mathbf{B}(\overline{\text{ran}}\,B^\*, \overline{\text{ran}}\,A^\*)$, denoted again by D, with $\|D\| \le \rho$. Finally, this mapping has a trivial extension D ∈ **B**(G, F), satisfying DB∗h = A∗h for h ∈ H and again $\|D\| \le \rho$. With C = D∗ one obtains that C ∈ **B**(F, G), A = BC, and $\|C\| \le \rho$. In view of the construction of D one has $\ker B = (\overline{\text{ran}}\,B^\*)^\perp \subset \ker D$, so that $\text{ran}\,C = \text{ran}\,D^\* \subset \overline{\text{ran}}\,B^\*$.

For the last statement it suffices to observe that the inclusion ran A ⊂ ran B implies A = BC for some C ∈ **B**(F, G) (see the proof of (ii) ⇒ (i)). However, this implies $AA^\* \le \rho^2 BB^\*$ for some ρ > 0 (see (i) ⇒ (iii)). □
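In the matrix case the operator C in the Douglas factorization can be written down explicitly as $C = B^{(-1)}A$, exactly as in the step (i) ⇒ (ii) above. The following numpy sketch is an illustration only (all names are ad hoc): it builds A with ran A ⊂ ran B and checks the factorization and the inequality in (iii).

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 3))
A = B @ rng.standard_normal((3, 4))   # guarantees ran A contained in ran B

C = np.linalg.pinv(B) @ A             # C = B^(-1) A, as in the proof above
rho = np.linalg.norm(C, 2)            # operator norm of C

assert np.allclose(B @ C, A)          # the factorization A = BC

# AA^* <= rho^2 BB^*  (statement (iii)): the difference is positive semidefinite
D = rho**2 * (B @ B.T) - A @ A.T
assert np.min(np.linalg.eigvalsh(D)) > -1e-8
```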

The next result contains a strengthening of Proposition D.3. Recall that $\text{ran}\,C \subset \overline{\text{ran}}\,B^\*$ implies that ker C = ker A. Hence, if C maps $\overline{\text{ran}}\,A^\*$ onto $\overline{\text{ran}}\,B^\*$, then C maps $\overline{\text{ran}}\,A^\*$ onto $\text{ran}\,C = \overline{\text{ran}}\,B^\*$ bijectively.

**Corollary D.4.** Assume that A ∈ **B**(F, H) and B ∈ **B**(G, H). Let ρ > 0. Then the following statements are equivalent:

(a) A = BC for some C ∈ **B**(F, G) with $\text{ran}\,C = \overline{\text{ran}}\,B^\*$ and $\|C\| \le \rho$;

(b) ran A = ran B and $\|B^{(-1)}\varphi\| \le \rho\,\|A^{(-1)}\varphi\|$, ϕ ∈ ran A;

(c) $\varepsilon^2 BB^\* \le AA^\* \le \rho^2 BB^\*$ for some 0 < ε < ρ.

Moreover, if ran A = ran B, then $\|B^{(-1)}\varphi\| \le \rho\,\|A^{(-1)}\varphi\|$, ϕ ∈ ran A, for some ρ > 0.

Proof. The assertions in Proposition D.3 will be freely used in the arguments below.

(a) ⇒ (c) Recall that C maps $\overline{\text{ran}}\,A^\*$ bijectively onto $\overline{\text{ran}}\,B^\*$. Since

$$\overline{\text{ran}}\,C^\* = \overline{\text{ran}}\,A^\* \quad \text{and} \quad \ker C^\* = \ker B,$$

one sees that C∗ maps $\overline{\text{ran}}\,B^\*$ bijectively onto $\overline{\text{ran}}\,A^\*$, as ran C∗ is closed by Lemma D.1. Thus, CC∗ is a bijective mapping from $\overline{\text{ran}}\,B^\*$ onto itself, and hence $CC^\* \ge \varepsilon^2$ on $\overline{\text{ran}}\,B^\*$ for some ε ∈ (0, ρ), so that

$$(AA^\*h, h) = (CC^\*B^\*h, B^\*h) \ge \varepsilon^2 (B^\*h, B^\*h) = \varepsilon^2 (BB^\*h, h), \quad h \in \mathfrak{H}.$$

Therefore, (c) follows.

(c) ⇒ (b) The inequality $BB^\* \le \varepsilon^{-2}AA^\*$ implies ran B ⊂ ran A, while $AA^\* \le \rho^2 BB^\*$ yields ran A ⊂ ran B together with the norm inequality in (b), by Proposition D.3.

(b) ⇒ (a) It suffices to show that $\overline{\text{ran}}\,B^\* \subset \text{ran}\,C$. Let $k \in \overline{\text{ran}}\,B^\*$. The assumption ran B ⊂ ran A implies that Bk = Ah for some h ∈ F. Since A = BC, one sees that Bk = BCh, and hence k − Ch ∈ ker B. On the other hand, $k \in \overline{\text{ran}}\,B^\*$ and $Ch \in \text{ran}\,C \subset \overline{\text{ran}}\,B^\* = (\ker B)^\perp$ yield $k - Ch \in (\ker B)^\perp$. Therefore, k = Ch and it follows that $\overline{\text{ran}}\,B^\* \subset \text{ran}\,C$ holds. □

An operator T ∈ **B**(F, G) is said to be a partial isometry if the restriction of T to (ker T)⊥ is an isometry. The initial space is (ker T)⊥ and the final space is ran T, which is automatically closed, since (ker T)⊥ is closed and the restriction of T to it is isometric. Let T ∈ **B**(F, G); then the following statements are equivalent:

(i) T is a partial isometry;

(ii) T∗ is a partial isometry;

(iii) T∗T is an orthogonal projection;

(iv) TT∗ is an orthogonal projection.
The next result can be considered to be a special case of Proposition D.3 and Corollary D.4.

**Corollary D.5.** Assume that A ∈ **B**(F, H) and B ∈ **B**(G, H). Then the following statements are equivalent:

(i) A = BC for some partial isometry C ∈ **B**(F, G) with initial space $\overline{\text{ran}}\,A^\*$ and final space $\overline{\text{ran}}\,B^\*$;

(ii) AA∗ = BB∗.

Proof. (i) ⇒ (ii) Observe that AA∗ = BCC∗B∗. By assumption, C is a partial isometry with initial space $\overline{\text{ran}}\,A^\*$ and final space $\overline{\text{ran}}\,B^\*$. Therefore, C∗ is a partial isometry with initial space $\overline{\text{ran}}\,B^\*$ and final space $\overline{\text{ran}}\,A^\*$, and it follows that CC∗ is the orthogonal projection onto the final space $\overline{\text{ran}}\,B^\*$ of C. Hence, AA∗ = BB∗.

(ii) ⇒ (i) It follows from Corollary D.4 that A = BC for some C ∈ **B**(F, G) with $\text{ran}\,C = \overline{\text{ran}}\,B^\*$ and $\|C\| \le 1$. Moreover,

$$\|C^\*B^\*h\| = \|A^\*h\| = \|B^\*h\|, \qquad h \in \mathfrak{H},$$

shows that C∗ is a partial isometry with initial space $\overline{\text{ran}}\,B^\*$ and final space $\overline{\text{ran}}\,A^\*$. Therefore, C is a partial isometry with initial space $\overline{\text{ran}}\,A^\*$ and final space $\overline{\text{ran}}\,B^\*$. □

The following polar decomposition of H ∈ **B**(F, H) can be seen as a consequence of the previous corollaries. Recall that |H∗| ∈ **B**(H) is the nonnegative operator in H defined by $|H^\*| = (HH^\*)^{\frac{1}{2}}$.

**Corollary D.6.** Let H ∈ **B**(F, H). Then H = |H∗|C and |H∗| = HC∗ for some partial isometry C ∈ **B**(F, H) with initial space $\overline{\text{ran}}\,H^\*$ and final space $\overline{\text{ran}}\,|H^\*|$. In addition, one has ran H = ran |H∗|.
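For matrices the decomposition in Corollary D.6 can be computed from a singular value decomposition: if $H = U\Sigma V^\top$, then $|H^\*| = U\Sigma U^\top$, and one may take for C the product of the singular vectors belonging to the nonzero singular values. The following numpy sketch is an illustration only (names ad hoc):

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # 4x3 of rank 2

U, s, Vt = np.linalg.svd(H)
r = int(np.sum(s > 1e-10))              # numerical rank (r = 2 by construction)
Ur, Vr = U[:, :r], Vt[:r].T

absHstar = Ur @ np.diag(s[:r]) @ Ur.T   # |H^*| = (H H^*)^(1/2)
C = Ur @ Vr.T                           # partial isometry: ran H^* onto ran |H^*|

assert np.allclose(absHstar @ absHstar, H @ H.T)   # |H^*|^2 = H H^*
assert np.allclose(absHstar @ C, H)                # H = |H^*| C
assert np.allclose(H @ C.T, absHstar)              # |H^*| = H C^*

P = C.T @ C                             # C^*C: orthogonal projection onto the initial space
assert np.allclose(P @ P, P)            # so C is indeed a partial isometry
```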

The following corollary is a direct consequence of Lemma D.2.

**Corollary D.7.** Let H ∈ **B**(H) be self-adjoint and nonnegative and let $H^{\frac{1}{2}}$ be its square root. Then there are the inclusions

$$
\operatorname{ran} H \subset \operatorname{ran} H^{\frac{1}{2}} \subset \overline{\operatorname{ran}} H^{\frac{1}{2}} = \overline{\operatorname{ran}} H. \tag{D.6}
$$

Moreover, the following statements are equivalent:

(i) ran H is closed;

(ii) ran $H^{\frac{1}{2}}$ is closed;

(iii) ran H = ran $H^{\frac{1}{2}}$;

in which case all the inclusions in (D.6) are identities.

Recall that the Moore–Penrose inverse of $H^{\frac{1}{2}}$ is the uniquely defined inverse of the restriction

$$H^{\frac{1}{2}} \colon (\ker H^{\frac{1}{2}})^\perp = (\ker H)^\perp \to \text{ran} \, H^{\frac{1}{2}}.$$

In the sequel the Moore–Penrose inverse of $H^{\frac{1}{2}}$ will be denoted by $H^{(-\frac{1}{2})}$, i.e.,

$$H^{(-\frac{1}{2})} := (H^{\frac{1}{2}})^{(-1)} .$$

It is clear that

$$H^{( -\frac{1}{2})} \in \mathbf{B}(\text{ran}\,H^{\frac{1}{2}}, (\text{ker}\,H)^{\perp}) \quad \Leftrightarrow \quad \text{ran}\,H^{\frac{1}{2}} \text{ is closed}.$$

Here is the version of Proposition D.3 for nonnegative operators.

**Proposition D.8.** Let A, B ∈ **B**(H) be self-adjoint and nonnegative, and let ρ > 0. Then the following statements are equivalent:

(i) $A^{\frac{1}{2}} = B^{\frac{1}{2}}C$ for some C ∈ **B**(H) with $\text{ran}\,C \subset \overline{\text{ran}}\,B^{\frac{1}{2}}$ and $\|C\| \le \rho$;

(ii) ran $A^{\frac{1}{2}}$ ⊂ ran $B^{\frac{1}{2}}$ and $\|B^{(-\frac{1}{2})}\varphi\| \le \rho\,\|A^{(-\frac{1}{2})}\varphi\|$, $\varphi \in \text{ran}\,A^{\frac{1}{2}}$;

(iii) $A \le \rho^2 B$.

Moreover, if ran $A^{\frac{1}{2}}$ ⊂ ran $B^{\frac{1}{2}}$, then $\|B^{(-\frac{1}{2})}\varphi\| \le \rho\,\|A^{(-\frac{1}{2})}\varphi\|$, $\varphi \in \text{ran}\,A^{\frac{1}{2}}$, for some ρ > 0.

## **Notes**

These pages contain a number of comments about the various results in this monograph and some further developments. We also give some historical remarks without any claim of completeness; the history of this area is complicated, also because of the limited exchange of ideas that has existed between East and West. The main setting of the book is that of operators and relations in Hilbert spaces. For an introduction and comprehensive treatment of linear operators in Hilbert spaces and related topics we refer the reader to the standard textbooks [2, 133, 141, 272, 276, 341, 348, 598, 649, 650, 651, 652, 673, 691, 723, 738, 743, 752, 756, 757]. We shall occasionally address topics that fall outside the scope of the monograph itself, e.g., the role of operators and relations in spaces with an indefinite inner product is indicated with proper references.

#### **Notes on Chapter 1**

Linear relations in Hilbert spaces go back at least to Arens [42]; they attracted attention because of their usefulness in the extension theory of not necessarily densely defined operators; see for instance [82, 128, 202, 218, 266, 297, 523, 619, 638]. Linear relations also appear in a natural way in the description of boundary conditions; cf. [2, 660]. Related material can be found in [74, 298, 406, 512, 513, 514, 515, 516, 576, 683, 685, 686]. The description of self-adjoint, maximal dissipative, or accumulative extensions of a symmetric operator (see Sections 1.4, 1.5, and 1.6) in terms of operators between the defect spaces in Theorem 1.7.12 and Theorem 1.7.14 goes back to von Neumann and Štraus [610, 731] in the densely defined case and was worked out in the general case by Coddington [202, 203]; see also [266, 490, 733]. These descriptions may be seen as building stones for the extension theory via boundary triplets in Chapter 2.

Sections 1.1, 1.2, and 1.3 present elementary facts. The identity (1.1.10) goes back to Štraus [726]. The notions of spectrum and points of regular type for a linear relation in Definition 1.2.1 and Definition 1.2.3 are introduced as in the operator case [268, 269, 404], while the adjoint of a linear relation in Definition 1.3.1 is introduced as in [611], see also [42, 659] and [396, 599]. Note that in Proposition 1.2.9 one sees a "pseudo-resolvent" which already shows that there is a relation in the background; cf. [421]. In Theorem 1.3.14 we present the notion of the operator part of a relation and the corresponding Hilbert space decomposition in the spirit of the Lebesgue decomposition of a relation [397, 398]; see also [439, 621, 622, 623] for the operator case. Operator parts have a connection with generalized inverses (see for instance [127, 609]), such as the Moore–Penrose inverse in Definition 1.3.17. Note that the more general definition $P\_{(\ker H)^\perp}H^{-1}$ gives a Moore–Penrose inverse that is a closable operator. The special classes of relations and their transforms are developed in the usual way; see [660, 705, 706]. The presentation is influenced by personal communication with McKelvey (see also [572]) and lecture notes by Kaltenbäck [451] and Woracek [775, 776]; see also [365, 556, 662] for more references. Although we assume a working knowledge of the spectral theory of self-adjoint operators in Hilbert spaces, we highlight in Section 1.5 a couple of useful facts, with special attention paid to the semibounded case. Lemma 1.5.7 goes back to [210], Proposition 1.5.11 is a consequence of the Douglas lemma in Appendix D, and Lemma 1.5.12 is taken from [95]. The extension theory in the sense of von Neumann can be found in Section 1.7. Note that extensions of a symmetric relation may be disjoint or transversal. Our present characterization of these notions in Theorem 1.7.3 seems to be new; see also [391].

For the development of boundary triplets we only need a few basic facts concerning spaces with an indefinite inner product, see Section 1.8. A typical result in this respect is the automatic boundedness property in Lemma 1.8.1; see [73, 77, 235, 236]. The transform in Definition 1.8.4 goes back to Shmuljan [506, 705, 706]. The notions of strong graph convergence and strong resolvent convergence of relations are treated in Section 1.9 (see [650] for the case of self-adjoint operators). Here we prove the equivalence of the two notions when there is a uniform bound. For Corollary 1.9.5 see [755]. The parametric representation of linear relations in Section 1.10 is closely connected with the theory of operator ranges. Here we only consider such representations from the point of view of boundary value problems; see also [2, 660] and Chapter 2. However, these parametric representations also play an important role in the above mentioned Lebesgue decompositions of relations and, more generally, in the Lebesgue type decompositions of relations; cf. [397]. Most of the material in the beginning of Section 1.11 consists of straightforward generalizations of properties of the resolvent. Lemma 1.11.4 is well known for the usual resolvent and seems to be new in the generalized context; it is used in Section 2.8. Lemma 1.11.5 was inspired by [523]. Nevanlinna families and pairs are generalizations of operator-valued Nevanlinna functions; cf. Appendix A. The notion of Nevanlinna pair goes back to [617]. The notion of Nevanlinna family appears already in [523] (and in a different guise in [266]) and was later more systematically studied in [20, 21, 22, 89, 94, 95, 234, 246].

Although in this monograph we will restrict ourselves mainly to Hilbert spaces, we will make some comments in the notes concerning the setting of indefinite inner product spaces; see for instance [31, 77, 79, 142, 430, 452] and also [775, 776] for Pontryagin spaces, almost Pontryagin spaces, Kreĭn spaces, and almost Kreĭn spaces. The spectral theory of operators and relations in such spaces

has been studied extensively; in Kreĭn spaces the operators are often required to have (locally) finitely many negative squares or to be (locally) definitizable, that is, roughly speaking some polynomial (or rational function) of the operator or relation under consideration is (locally) nonnegative in the Kreĭn space sense. Here we only mention work related to the present topics [75, 78, 79, 99, 220, 222, 253, 260, 264, 268, 269, 274, 275, 434, 435, 436, 437, 497, 498, 499, 500, 501, 502, 503, 519, 522, 631, 632, 633, 634, 674, 717, 718, 719, 720, 721, 722, 764, 765].

#### **Notes on Chapter 2**

Boundary triplets were originally introduced in the works of Bruk [176, 177, 178] and Kochubeĭ [466] and were used in the study of operator differential equations in the monograph [346]; see also [509]. The notion of boundary triplet appeared implicitly in a different form already much earlier in the work of Calkin [186, 187] (see also [276, Chapter XII] and [410]). The Weyl function was introduced by Derkach and Malamud in [243, 244]; it can also be interpreted as the Q-function from the works of Kreĭn [491, 492] (see also [679]) and, e.g., the papers [499, 500, 501, 502, 503, 504] of Kreĭn and Langer, and [523] of Langer and Textorius. We also mention the more recent monographs [359, 691], where boundary triplets and Weyl functions are briefly discussed, as well as the very recent monograph by Derkach and Malamud [247] (in Russian), in which a more detailed exposition and more than 400 references (also many older papers published in Russian) can be found. From our point of view the most influential general papers on boundary triplets and Weyl functions are [245, 246] and [184]; the latter also has an introduction to the basic notions.

While a large part of the material in this chapter can be found in the literature, the presentation here is given in a unified form for general symmetric relations. The connection between the abstract Green identity and the appropriate indefinite inner products (see Section 1.8) is used in Section 2.1 when deriving some of the basic properties of boundary triplets and their transforms. The inverse result for boundary triplets in Theorem 2.1.9 is taken from [103, 104]. The discussion in Section 2.2 on parametric representations of boundary conditions is given to establish a connection between abstract and concrete boundary value problems. The definitions and the central properties of γ-fields and Weyl functions are presented in Section 2.3 and these results can be found in [245, 246]. In particular, Proposition 2.3.2 (part (ii)) and Proposition 2.3.6 (parts (iii), (v)) show the connection with the notion of Q-function (see [184, 245, 246]), while the formula in part (vi) of Proposition 2.3.6 is taken from [265]. Without specification of the γ-field and Weyl function, constructions of boundary triplets appear already in the early literature; see [176, 466, 577] and the monographs [346, 509]. Here the constructions are based on decompositions that are closely related to the von Neumann formulas in Section 1.7. The Weyl function in Theorem 2.4.1 coincides with the abstract Donoghue type M-function that was studied, for instance, in [317, 320, 327]. Transformations of boundary triplets and the corresponding

γ-fields and Weyl functions have been treated in [234, 237, 245, 246]. In particular, the description of boundary triplets in Theorem 2.5.1 is taken from [246]. We make the precise connection with Q-functions in Corollary 2.5.8. The present notion of unitary equivalence of boundary triplets in Definition 2.5.14 is slightly more general than the definition used in [382].

Section 2.6 is devoted to Kreĭn's formula for intermediate extensions. The Kreĭn type formula (for intermediate extensions) phrased in terms of boundary triplets in Theorem 2.6.1 is adapted to the present setting from [233]. This yields new simple proofs for the description of various parts of the spectra of intermediate extensions in Theorem 2.6.2. This description appears in a similar form in [243, 244, 245, 246] and is treated for dissipative extensions in [346, 468] by means of characteristic functions. The same remark applies to the proof of Theorem 2.6.5, which can be found in a different form in [184]. These results indicate the importance of Kreĭn's formula for the spectral analysis of self-adjoint extensions of symmetric operators. Closely related is the completion problem, briefly touched upon in Remark 2.4.4, which is investigated in [383], where also an analogous boundary triplet appears for the nonnegative case. This case of extensions of a bounded symmetric operator was studied earlier in, for instance, [735, 736].

Section 2.7 is devoted to the Kreĭn–Naĭmark formula, which is our terminology for the Kreĭn formula for compressed resolvents of self-adjoint exit space extensions, in short, Kreĭn's formula for exit space extensions. Actually, it was Naĭmark [605, 606, 607] who investigated different types of extensions (with exit) and a proper interpretation then yields the corresponding resolvent formula. For the notion of Štraus family and for Kreĭn's formula for exit space extensions we refer to [491, 492, 679] and to [726, 727, 728, 729, 730, 731, 732, 733]; some other related papers are [51, 121, 126, 219, 234, 237, 254, 266, 320, 497, 523, 554, 595, 596, 624]. The Štraus families are restrictions of the adjoint in terms of "λ-dependent boundary conditions" given by a Nevanlinna family or corresponding Nevanlinna pair. When the exit space is a Pontryagin space, the same mechanism is in force and the boundary conditions are now given by a generalized Nevanlinna family or corresponding Nevanlinna pair (generalized in the sense that the family or pair has negative squares); we mention [76, 100, 111, 253, 257, 258, 259, 260, 261, 263, 290, 521, 523] and [664] for the special case of a finite-dimensional exit space. Boundary triplets for symmetric relations in Pontryagin and even in Kreĭn spaces were introduced by Derkach (see [101, 227, 228, 229, 230]) and for isometric operators in Pontryagin spaces see [80]; cf. Chapter 6 for concrete λ-dependent boundary conditions.

The perturbation problems in Section 2.8 and Kreĭn's formula are closely related. The $\mathfrak{S}\_p$-perturbation result in Theorem 2.8.3 appears in a similar form in [245]. Standard (additive) perturbations of an unbounded self-adjoint operator yield an analogous situation, where the symmetric operator is maximally nondensely defined [405].

Rigged Hilbert spaces offer a framework where extension theory of unbounded symmetric operators can be developed in a somewhat analogous manner as in the bounded case; see [371, 400, 401, 746]. A closely related approach to extensions of symmetric relations relies on the concept of graph perturbations studied in [203, 204, 205, 206, 238, 264, 267, 270, 399, 403]. There has been a considerable interest in the related concept of singular perturbations, where perturbation elements belong to rigged Hilbert spaces with negative indices; see [10], which contains an extensive list of earlier literature, and see also the notes to Chapter 6 and Chapter 8 in the context of differential operators with singular potentials. General operator models for highly singular perturbations involve lifting of operators in Pontryagin spaces [117, 239, 240, 241, 252, 256, 640, 641, 707].

The interest in boundary triplets and Weyl functions has substantially grown in the last decade, so a complete list of references is beyond the scope of these notes. However, for a selection of papers in which boundary triplet techniques were applied to differential operators and other related problems we refer to [28, 50, 86, 87, 97, 112, 113, 123, 143, 144, 155, 169, 170, 173, 174, 184, 185, 241, 288, 307, 342, 343, 344, 377, 461, 475, 477, 478, 483, 511, 529, 551, 562, 563, 564, 565, 589, 590, 591, 626, 627, 642, 644, 677, 750, 751]. For boundary triplets and similar techniques in the analysis of quantum graphs see [110, 134, 172, 183, 196, 287, 289, 294, 536, 625, 637, 645, 646].

Boundary triplets and their Weyl functions for symmetric operators have been further generalized in [103, 105, 109, 110, 115, 233, 236, 237, 246] by relaxing some of the conditions in the definition of a boundary triplet. Moreover, in the setting of dual pairs of operators (see [525]) boundary triplets have been introduced in [559, 560] and applied, e.g., in [170, 173, 300, 302, 381]; they were specialized to the case of isometric operators in [561]. Boundary triplets and their extensions also occur naturally in a system-theoretic environment, where the underlying operator is often isometric, contractive, or skew-symmetric; cf. [65, 66, 67, 102, 394, 750, 751].

By applying a transform of the boundary triplet, resulting in a Cayley transform of the Weyl function, one obtains a connection with the notion of characteristic function, which was used to elaborate an alternative analytic tool for studying the extension theory of symmetric operators; for some literature in this direction see, e.g., [165, 166, 246, 346, 468, 509, 603, 728].

#### **Notes on Chapter 3**

Most of the material in Section 3.1 and Section 3.2 is working knowledge from measure theory. Typical classical textbooks providing a more detailed exposition of Borel measures are [272, 335, 416, 676, 682]; the papers [60, 61, 271] are listed here for symmetric derivatives and the limit behavior of the Borel transform (see Appendix A). We also recommend the more recent monograph [738]. Section 3.3 is a brief exposition of the notions of absolutely continuous and singular spectrum of self-adjoint operators (see [649, 691, 738, 757]) in the context of self-adjoint relations; cf. Section 1.5 and Theorem 1.5.1.

Simplicity of symmetric operators and the decomposition of a symmetric operator into a self-adjoint and a simple symmetric part in Theorem 3.4.4 go back to [496]; cf. [523] for a version of this result for symmetric relations. The local version of simplicity in Definition 3.4.9 appears in papers of Jonas (see, e.g., [437]) and was more recently considered in [119].

The characterization of the spectrum via the limit properties of the Weyl function in Section 3.5 and Section 3.6 is well known for the special case of singular Sturm–Liouville differential operators and the Titchmarsh–Weyl m-function; cf. Chapter 6 and the classics [209, 740, 741] and, e.g., [60, 97, 175, 224, 321, 328, 333, 334, 335, 426, 479, 711, 712] and also [195]. This description was extended in [155] to the abstract setting of boundary triplets and Weyl functions in the case where the underlying symmetric operator is densely defined and simple (on R). These ideas were further generalized in [119, 120] and applied to elliptic partial differential operators. The present treatment in the context of linear relations seems to be new; cf. [265] for Theorem 3.6.1 (i).

Our presentation of the material in Section 3.7 is strongly inspired by the contribution [523] of Langer and Textorius. The limit properties in this section yield important connections between the analytic properties of the Weyl functions and the geometric properties of the associated self-adjoint extensions. The characterizations stated in Proposition 3.7.1 and Proposition 3.7.4 were established initially in [371, 374, 379], where they were used to introduce the notions of generalized Friedrichs and Kreĭn–von Neumann extensions for nonsemibounded symmetric operators or relations; see the notes on Chapter 5.

Finally, the translation of the results in Section 3.5 and Section 3.6 to arbitrary self-adjoint extensions in Section 3.8 based on the corresponding transformation of the Weyl function (see Section 2.5) is immediate.

#### **Notes on Chapter 4**

One of the central themes in this chapter is the construction of a Hilbert space via a nonnegative reproducing kernel; such constructions have been studied intensively since Aronszajn [59]. For some introductory texts on this topic we refer the reader to [32, 272, 277, 278, 680, 737]. It should be mentioned that there are different approaches avoiding the mechanism of reproducing kernel Hilbert spaces; see for instance Brodskiĭ [165] and, for another method, Sz.-Nagy and Korányi [604] (sometimes called the ε-method).

Section 4.1 gives a short review of all the facts about reproducing kernel Hilbert spaces that are needed in the monograph. We remind the reader that special reproducing kernels were considered by de Branges and Rovnyak for the setting of Nevanlinna functions or Schur functions [146, 147, 148, 149, 150, 151, 152, 153]; see also [24, 25, 33].

In Section 4.2 the case of operator-valued Nevanlinna functions is taken up first. Theorem 4.2.1 states that the Nevanlinna kernel of a Nevanlinna function M is nonnegative; the proof can be found in [450]. In this case the reproducing kernel approach leads to the realization of the function −(M(λ)+λ)<sup>−1</sup> in terms of compressed resolvents in Theorem 4.2.2; see [231, Theorem 2.5 and Remark 2.6] and also [409]. The unitary equivalence of different models in Theorem 4.2.3 is based on a standard argument. If the Nevanlinna function is uniformly strict, then it can be regarded as a Weyl function; see Theorem 4.2.4, which follows by an inspection of the proof of Theorem 4.2.2. The uniqueness of the model in Theorem 4.2.6 is obtained by a further specialization of the proof of Theorem 4.2.3.
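For the reader's convenience we recall the standard kernel meant here: for a Nevanlinna function M the Nevanlinna kernel is

```latex
\mathsf N_M(\lambda,\mu)=\frac{M(\lambda)-M(\mu)^{*}}{\lambda-\bar\mu},
\qquad \lambda,\mu\in\mathbb C\setminus\mathbb R,\quad \lambda\neq\bar\mu,
```

and nonnegativity means that Σ<sub>i,j</sub>(N<sub>M</sub>(λ<sub>i</sub>,λ<sub>j</sub>)c<sub>j</sub>, c<sub>i</sub>) ≥ 0 for all finite choices of points λ<sub>i</sub> and vectors c<sub>i</sub>.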

Section 4.3 presents a reproducing kernel approach for scalar Nevanlinna functions via their integral representation (see Appendix A) as can be found in [246]; cf. [19, 379, 530]. In the present monograph only the case of scalar Nevanlinna functions is treated in this way; the matrix-valued case is a straightforward generalization. For a rigorous treatment involving operator-valued Nevanlinna functions one should apply [558].

Parallel to the above treatment of operator-valued Nevanlinna functions, the case of Nevanlinna families or Nevanlinna pairs is taken up in Section 4.4. The nonnegativity of the Nevanlinna kernel for a Nevanlinna family or Nevanlinna pair in Theorem 4.4.1 is proved via a simple reduction argument. The representation model for Nevanlinna pairs in Theorem 4.4.2 is again given along the lines of Derkach's work [231]; see also [265]. In fact, the proof can be carried forward to show that any Nevanlinna family is the Weyl family of a boundary relation (see the notes for Chapter 2). Another reproducing kernel approach was followed in [95, Theorem 6.1] by a reduction to the corresponding case of Schur functions; cf. [152, 153]. Closely connected is the notion of generalized resolvents given in Definition 4.4.5; it was formalized by McKelvey [572], see also [207]. Its representation in Corollary 4.4.7 is equivalent to the representation of Nevanlinna pairs in Theorem 4.4.2. In Corollary 4.4.9 we present Naĭmark's dilation result [606, 607] as a straightforward consequence of the characterization of generalized resolvents in Theorem 4.4.8; cf. [659].

Our construction of the exit space extension in Theorem 4.5.2 uses the above representation of generalized resolvents; cf. [497]. The identity in (4.5.3) gives the precise connection between the exit space and the Nevanlinna pair leading to the exit space. Closely related is an interpretation via the coupling method in Section 4.6. By taking an "orthogonal sum of two boundary triplets" as in Proposition 4.6.1 one obtains an exit space extension for one of the original symmetric relations whose compressed resolvent is described in Corollary 4.6.3. This leads to an interpretation of the Kreĭn formula when the parameter family coincides with a uniformly strict Nevanlinna function; see [373, 375, 376, 630]. However, a similar interpretation exists when the parameter family is a general Nevanlinna family, once it is interpreted as the Weyl family of a boundary relation; see [236, 237, 238]. It has already been explained in the notes on Chapter 2 that it is quite natural to have exit spaces which may be indefinite, in which case the Nevanlinna family must be replaced by a more general object.

And, indeed, more general reproducing kernel spaces can be constructed. For the Pontryagin space situation and reproducing kernels with finitely many negative squares one may consult the monograph [23] and the papers [89, 232, 409, 497, 499, 500]; see also [454, 455, 456, 457, 458, 459]. For the setting of almost Pontryagin spaces, see [452, 775, 777]. Finally, for Kreĭn spaces we refer to [15, 16, 17, 262, 434, 435, 437, 778].

#### **Notes on Chapter 5**

The approach to semibounded forms in Section 5.1 follows the lines of Kato's work [462], where densely defined semibounded and sectorial forms are treated; see also [2, 49, 246, 346, 359, 552, 649, 650, 690, 691]. An arbitrary form is not necessarily closable, but there is a Lebesgue decomposition into the sum of a closable form and a singular form; see [395, 397, 404, 476, 710]. Versions of the representation theorems for nondensely defined closed forms appear in [14, 44, 393, 661, 710]; see Theorem 5.1.18 and Theorem 5.1.23. Nondensely defined forms appear, for instance, when treating sums of densely defined forms, leading to a representation problem for the form sum; see [295, 296, 389, 390, 392]. The ordering of semibounded forms and semibounded self-adjoint relations in Section 5.2 results in Proposition 5.2.7; the present simple treatment via the Douglas lemma extends [390]; see Proposition 1.5.11. The characterization in terms of the corresponding resolvent operators goes back to Rellich [654]; see also [210, 390, 412, 462]. For the monotonicity principle (Theorem 5.2.11) see [96], and for similar statements [83, 245, 246, 618, 661, 709, 710, 753].

The main properties of the Friedrichs extension, namely Theorem 5.3.3 and Proposition 5.3.6, follow from the present approach via forms; cf. [202, 370, 734]. The square root decomposition approach involving resolvents to describe those semibounded self-adjoint extensions which are transversal to the Friedrichs extension extends [391] (for the nonnegative case). The transversality criterion in Theorem 5.3.8 is an extension of Malamud's earlier result [553]. The introduction of Kreĭn type extensions in Definition 5.4.2 is inspired by [210]. The minimality of the Kreĭn type extension is inherited from the maximality of the Friedrichs extension, and this yields a characterization of all semibounded extensions whose lower bounds belong to a certain prescribed interval in Theorem 5.4.6; it is the semibounded analog of the characterization of nonnegative self-adjoint extensions in the nonnegative case [210]. The interpretation of the Friedrichs and Kreĭn type extensions as strong resolvent limits in Theorem 5.4.10 goes back in the operator case to [34] and in the general case to [383]. The present treatment is based on the general monotonicity principle in Theorem 5.2.11. The concept of a positively closable operator (see (iii) of Corollary 5.4.8) was introduced in [34] to detect the nonnegative self-adjoint operator extensions. This notion is closely connected with [490], where self-adjoint operator extensions were determined. Closely related to the Friedrichs and the Kreĭn–von Neumann extensions is the study of all extremal extensions of a nonnegative or a sectorial operator in [43, 44, 45, 46, 47, 53, 55, 383, 389, 391, 392, 393, 510, 648, 702, 703].

If a symmetric relation is semibounded, then a boundary triplet can be chosen so that A<sub>0</sub> is the Friedrichs extension and A<sub>1</sub> is a semibounded self-adjoint extension (for instance, a Kreĭn type extension, when transversality is satisfied). This type of boundary triplet serves as an extension of the notion of positive boundary triplets introduced in [43, 467]. The main results in Section 5.5, when specialized to the nonnegative case, reduce to the results in [245, 246]. A result similar to Proposition 5.5.8 (ii) can be found in [349, 355]; see also the more recent contribution [362], which deals with elliptic partial differential operators on exterior domains. We remark that the sufficient condition in Lemma 5.5.7 is also necessary for the extension A<sub>Θ</sub> to be semibounded for every semibounded self-adjoint relation Θ in G; see [245, 246]. For an example where Θ is a nonzero bounded operator with arbitrarily small operator norm ‖Θ‖, while A<sub>Θ</sub> is not semibounded from below, see [378]. In the context of elliptic partial differential operators on bounded domains the boundary triplet in Example 5.5.13 appears implicitly already in Grubb [352]; see also [169, 359, 557] and the notes to Chapter 8 for more details.

The first Green identity appearing in Theorem 5.5.14 establishes a link with the notion of boundary pairs for semibounded operators and the corresponding semibounded forms in Section 5.6. Definition 5.6.1 of a boundary pair for a semibounded relation S is adapted from Arlinskiĭ [44]; see also [50] for the context of generalized boundary triplets. This notion makes it possible to describe the closed forms associated with all semibounded self-adjoint extensions of a given semibounded relation S by means of closed semibounded forms in the parameter space. In particular, Theorem 5.6.11 contains the description of all nonnegative closed forms generated by the nonnegative self-adjoint extensions in [53]; for a similar result in the case of maximal sectorial extensions see [44]. By connecting boundary pairs with compatible boundary triplets (see Theorem 5.6.6 and Theorem 5.6.10) one obtains an explicit description of the forms associated with all semibounded self-adjoint extensions by means of the boundary conditions; see Theorem 5.6.13.

We remind the reader of the study of nonnegative self-adjoint extensions of a densely defined nonnegative operator as initiated by von Neumann [610]; see [280, 309, 310, 724, 773] and the papers by Kreĭn [491, 492, 493, 494]. Kreĭn's analysis was complemented by Vishik [747] and Birman [139]; see also [2, 14, 58, 157, 217, 295, 346, 347, 352, 354, 372, 464, 714]. For an operator matrix completion approach see [383, 472], and see also [81] for an extension in a Pontryagin space setting. This approach has its origin in the famous paper of Shmuljan [704]. The notion of boundary pairs for forms can also be traced back to Kreĭn [493, 494], Vishik [747], and Birman [139]. Some further developments can be found, e.g., in [36, 43, 44, 47, 48, 54, 56, 57, 248, 301, 357, 359, 363, 553, 555, 556], where accretive and sectorial extensions and the associated closed forms are also treated; see also [552, 580, 581, 582, 636, 725]. A related concept of boundary pairs designed for elliptic partial differential operators and corresponding quadratic forms was recently proposed by Post [647].

A study of symmetric forms which are not semibounded and of the concepts of generalized Friedrichs and Kreĭn–von Neumann extensions goes beyond the scope of the present text. The interested reader may consult in this direction, e.g., [125, 215, 216, 303, 306, 364, 371, 374, 379, 570, 571].

#### **Notes on Chapter 6**

The main textbooks in this area are [2, 26, 72, 208, 281, 284, 414, 420, 438, 489, 542, 574, 608, 663, 724, 740, 741, 754, 757, 782]. The Sturm–Liouville differential expressions that we consider here have coefficients which are assumed to satisfy only weak integrability conditions, giving rise to quasiderivatives (under stronger smoothness conditions ordinary derivatives would suffice); see for instance [724], where such quasiderivatives were already used. Most of the material concerning singular Sturm–Liouville equations has been stimulated by the work of Weyl [758, 759, 760]; see also [761]. For later work on operators of higher order see, for instance, [200, 201, 465, 469, 470].

Section 6.1 and Section 6.2 contain standard material, where we follow parts of [414, 757]; for the quasiderivatives in the limit-circle case we refer to [312]. The treatment of the regular and limit-circle cases in Section 6.3 is straightforward; see [312] and also [12] for a boundary triplet in the limit-circle case. For an alternative description of boundary conditions and Weyl functions in the regular case we mention the recent papers [197, 199, 332] using the concept of boundary data maps. The limit-point case is treated in Section 6.4. In the proof of Proposition 6.4.4 concerning the simplicity of the minimal operator we follow the arguments of R.C. Gilbert [336]; cf. [263]. The treatment of the Fourier transform in Lemma 6.4.6 and Theorem 6.4.7 uses the theory in Appendix B. The surjectivity is direct in this case. The statement in Lemma 6.4.8 that the Fourier transform provides a model for the Weyl function uses an argument provided in [281].
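To fix ideas, we recall the classical construction behind the limit-point discussion (standard material; the weight function is suppressed here and the normalization of the initial conditions varies from text to text). With solutions θ(·,λ) and φ(·,λ) of the Sturm–Liouville equation normalized at the regular endpoint a by

```latex
-(pf')'+qf=\lambda f \ \text{ on } (a,b),\qquad
\theta(a,\lambda)=1,\ (p\theta')(a,\lambda)=0,\qquad
\varphi(a,\lambda)=0,\ (p\varphi')(a,\lambda)=1,
```

the limit-point case at b yields, for each λ ∈ C∖R, a unique coefficient m(λ) such that θ(·,λ) + m(λ)φ(·,λ) is square-integrable near b; this m is the Titchmarsh–Weyl function, a scalar Nevanlinna function.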

In general, interface conditions are a tool to paste together several boundary value problems. The interface conditions which we consider in Section 6.5 make it possible, in the case of two endpoints in the limit-point case, to consider minimal operators [339, 740] and to apply the coupling of boundary triplets as explained in Section 4.6. This automatically leads to exit space extensions of a certain kind. As this goes beyond the present text, we only refer to [336, 338, 441, 713].

The present theory of subordinate solutions in Section 6.7 goes back to D.J. Gilbert and D.B. Pearson [335]; see also [333, 334, 463]. Further contributions can be found in [433, 533, 655, 656, 657]; see also [738]. Our presentation is modelled on [388].

If the minimal operator is semibounded, one can apply the form methods from Chapter 5. In the regular case the treatment of the forms associated with the Sturm–Liouville expression in Section 6.8 is based on the inequality in Lemma 6.8.2 and Theorem 5.1.16; see [212, 493, 494, 757]. This makes it possible to introduce boundary pairs which are compatible with the given boundary triplet. Section 6.9 and Section 6.10 contain preparatory material so that boundary pairs can be introduced also when the endpoints are singular. Section 6.9 on Dirichlet forms is inspired by Rellich [654] and Kalf [448]. The proof of Lemma 6.9.7 seems to be new. Section 6.10 is concerned with principal and nonprincipal solutions; cf. [198, 368, 369, 535] and, in particular, [616]. The Hardy type inequalities in Lemma 6.10.1 go back to [449]. The treatment is in some sense folklore, but Theorem 6.10.9 seems to be new. Our handling of the regular case in Section 6.8 serves as a model for the semibounded singular cases in Section 6.11 and Section 6.12. These two sections are influenced by Kalf [448]; see also [671, 672] and the recent contribution [168]. The determination of the Friedrichs extension in our treatment goes hand in hand with the boundary pair; see also [310, 311, 508, 597, 600, 601, 615, 616, 654, 661, 671, 672, 779, 782].
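For the reader's convenience, the classical prototype of such inequalities is the following (the weighted versions in Lemma 6.10.1 may differ in form):

```latex
\int_0^\infty \frac{|f(x)|^2}{x^2}\,dx\;\le\;4\int_0^\infty |f'(x)|^2\,dx,
\qquad f\in C_c^\infty(0,\infty),
```

where the constant 4 is optimal.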

The case of an integrable potential on a half-line is treated in Section 6.13. It appears already in [740], where the corresponding spectral measure is determined. The construction of solutions with a given asymptotic behavior can be found for instance in [542, 740]. The example with the Pöschl–Teller potential at the very end of Section 6.13 can be found in, e.g., [2].

There is a large amount of literature devoted to special topics. For λ-dependent boundary conditions we just refer to [117, 313, 314, 675, 687, 688, 689, 749] and to [261, 263] for Pontryagin exit spaces. The determination of the Kreĭn–von Neumann extension and other nonnegative extensions can be found in [212, 350]. For singular perturbations associated with Sturm–Liouville operators, see for instance [331] and the later papers [9, 28, 29, 124, 171, 182, 282, 315, 316, 479, 480, 481, 482, 507, 531, 532, 549, 550]; for δ-point interactions we refer to [8, 293]. Special properties of the Titchmarsh–Weyl coefficient have been studied in many papers; we just mention [130, 292, 366, 367]. Already early in the 20th century there was an interest in boundary value problems with conditions at interior points and, more generally, integral boundary conditions; for instance, see [135, 583] and for a later review [762]. It was pointed out by Coddington [203, 204] that such conditions can be described in terms of relation extensions of a restriction of the usual minimal operator; see for a brief selection also [188, 189, 190, 203, 204, 205, 206, 238, 264, 267, 270, 399, 403, 484, 485, 486, 697, 784].

The topic of semibounded self-adjoint extensions in Section 5.5 and Section 5.6 is closely related to problems that are called left-definite in the literature; we only mention [13, 131, 473, 474, 488, 545, 698, 699, 708, 748] and the references therein. Another case of interest is when the weight function in the Sturm–Liouville equation changes sign, in which case Kreĭn spaces come up naturally; the following is just a limited selection: [116, 122, 123, 138, 223, 225, 304, 305]. Under certain circumstances the Weyl function in the limit-point case (of a usual Sturm–Liouville operator) belongs to the Kac class; see for instance [420]. For further results in this direction, see [384, 385, 386, 387]; in these cases there is a distinguished self-adjoint extension, namely the generalized Friedrichs extension; see the notes on Chapter 5. Sturm–Liouville equations with vector-valued coefficients are beyond the scope of this text; see [345, 346] and for a recent contribution [330].

#### **Notes on Chapter 7**

Canonical systems of differential equations are discussed in the monographs [72, 340, 653, 681]. The spectral theory for such systems has been developed in varying degrees of generality in an abundance of papers [62, 63, 64, 85, 161, 162, 163, 213, 263, 265, 279, 422, 423, 424, 425, 427, 428, 487, 575, 592, 612, 613, 614, 620, 667, 668, 669, 687, 688, 689, 692, 693, 694, 695, 696]. In [520] there is a general procedure to reduce systems to a canonical form. Many investigations concerning boundary value problems can be written in terms of canonical systems; see [619, 693] for a general procedure, under which, for instance, [129, 211] can be subsumed. As a particular example we mention the system of second-order differential equations studied in the dissertation of a student of Titchmarsh [191, 192, 193, 194]. Frequently conditions are imposed on the canonical system so that there is a full analogy with the usual operator treatment. However, in general one is confronted with relations.
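For orientation, a 2×2 canonical system is a first-order system of the form (in one common normalization; sign conventions and the placement of the coefficients vary in the literature)

```latex
Jf'(t)-H(t)f(t)=\lambda\,\Delta(t)f(t),\qquad
J=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix},
```

where Δ(t) ≥ 0 is a locally integrable nonnegative 2×2 matrix weight, which generates the weighted Hilbert space in which the system is studied, and H(t) is a real symmetric 2×2 matrix function.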

The first treatment of canonical systems in terms of relations is due to Orcutt [619]; see also [97, 408, 471, 524, 538] and [11, 593, 594]. In our opinion the present case of 2×2 systems already serves as a good illustration of the various phenomena that may occur. The interest in 2×2 systems is justified by the work of de Branges [146, 147, 148, 149]; see also [278, 504, 528, 715, 716, 766, 767, 768, 769, 770, 771, 772]. Via these systems one may also approach Sturm–Liouville equations with distributional coefficients as, for instance, in [281, 283]. In Remling [658] and the forthcoming book [666] by Romanov our systems are treated with the de Branges results in mind. For a number of topics we rely on the lecture notes by Kaltenbäck and Woracek [460].

Sections 7.1, 7.2, and 7.3 contain preparatory material. The inequalities (7.1.1) and (7.1.3) are standard for Bochner integrals. The construction of the Hilbert space L<sup>2</sup><sub>Δ</sub>(ı) follows the treatment in [460]; see also [276, 670]. For further information concerning these and more general spaces, see [440] and the expositions in [2, 276]. The treatment of the square-integrable solutions in Section 7.4 is based on the monotonicity principle in Theorem 5.2.11; cf. [97]. For a different treatment, see for instance [612]. Section 7.5 is devoted to definite systems. The present notion of definiteness can be found in [340, 619]; in the literature sometimes a more restrictive form of definiteness is used. Proposition 7.5.4 can be found in [614, Hilfssatz (3.1)] and [471]; for a more abstract treatment, see [98]. The modification of solutions is avoided in [619]; the present argument in Proposition 7.5.6 seems to be new. The minimal and maximal (multivalued) relations associated with canonical systems can be found in Section 7.6. They were originally introduced by Orcutt [619]; see also Kac [442, 443, 444, 445] and [408]. The extension theory for them naturally involves (multivalued) relations. In [97] it is indicated how all such systems fit in the boundary triplet scheme; see also [471, 538].

In Section 7.7 it is assumed that the endpoints are (quasi)regular, in which case a boundary triplet is constructed in Theorem 7.7.2. The resolvent of the self-adjoint extension ker Γ<sub>0</sub> is an integral operator whose kernel belongs to the Hilbert–Schmidt class. In this way we can show that the operator part of the minimal relation is simple. In Section 7.8 it is assumed that one of the endpoints is in the limit-point case. We prove the simplicity of the operator part of the minimal relation along the lines of [337], cf. [263] (see also Chapter 6 for the Sturm–Liouville case). The treatment of the Fourier transform in the limit-point case uses Appendix B and parallels the treatment in Chapter 6. Subordinate solutions for canonical systems are introduced in Section 7.9; cf. [388]. Here again we follow the results for the Sturm–Liouville case in Section 6.7; see also the corresponding notes on Chapter 6, where the appropriate references can be found.

The discussion of the special cases in Section 7.10 is just an indication of the possibilities; we could also pay attention to, for instance, the so-called Dirac systems. The connection with the Sturm–Liouville case and its generalization in [281] is only briefly indicated. The special case in Theorem 7.10.1 is modelled on [460]. For the connection of canonical systems with strings see, for instance, [453]. Finally, we would like to mention that the approach via boundary triplets also allows λ-dependent boundary conditions; for some related papers we refer to [221, 263, 265, 675, 687, 688, 689, 742].

#### **Notes on Chapter 8**

The notion of Gelfand triples or riggings of Hilbert spaces in Section 8.1 is often used in the treatment of partial differential equations and Sobolev spaces, and can be found in a similar form in Berezanskiĭ's monograph [133] (see also the textbook [774]); cf. [132] and the contributions [534, 537] by Lax and Leray. There are many well-known textbooks on Sobolev spaces, among which we mention here only the monographs [3, 5, 136, 164, 284, 291, 351, 569, 584, 744, 783]. In our opinion a very useful source for trace maps, the Green identities, and similar related results (also for nonsmooth domains) is the monograph [573] by McLean; see also the list of references therein. For the description of the spaces H<sup>s</sup>(∂Ω) in Corollary 8.2.2 as domains of powers of the Laplace–Beltrami operator on ∂Ω see [322, 544, 567]. In some cases it is also convenient to use powers of a Dirichlet-to-Neumann map, as in [114].
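Schematically, a Gelfand triple (rigging) consists of two continuous dense embeddings

```latex
\mathfrak H_{+}\;\hookrightarrow\;\mathfrak H\;\hookrightarrow\;\mathfrak H_{-},
```

where the space with the minus index is identified with the (anti)dual of the space with the plus index by extending the inner product of the middle space; a standard concrete example on the boundary of a smooth bounded domain Ω is H<sup>1/2</sup>(∂Ω) ⊂ L<sup>2</sup>(∂Ω) ⊂ H<sup>−1/2</sup>(∂Ω).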

The discussion of the minimal and maximal operators in Section 8.3 is standard; e.g., Proposition 8.3.1 can be found in Triebel's textbook [745]. The H<sup>2</sup>-regularity in Theorem 8.3.4 for smooth domains can be found in [167] and [84]; see also [4, 308, 517, 544] or [1, Theorem 7.2] for a recent very general result on the H<sup>2</sup>-regularity of the Neumann operator (and more general operators) on certain classes of nonsmooth bounded and unbounded domains. As explained in Section 8.7, for the class of Lipschitz domains the H<sup>2</sup>-regularity of the Dirichlet and Neumann Laplacian up to the boundary fails and has to be replaced by the weaker H<sup>3/2</sup>-regularity; cf. [431, 432] and [92, 115, 323, 324, 325, 326] for some recent closely related works dealing with Schrödinger operators on Lipschitz domains. In this context we also mention the papers [585, 586, 587, 588] by Mitrea and Taylor for general layer potential methods in Lipschitz domains on Riemannian manifolds. It is worth mentioning that the resolvent of the Neumann Laplacian in Proposition 8.3.3 is not necessarily compact if the boundary of the bounded domain Ω is not of class C<sup>2</sup> (or not Lipschitz); see [415] for a well-known counterexample using a rooms-and-passages domain. For similar unusual spectral properties we also mention Example 8.4.9 on the essential spectrum of self-adjoint realizations of the Laplacian on a bounded domain, which however is very different from a technical point of view. In a related context the existence of self-adjoint extensions with prescribed point spectrum, absolutely continuous, and singular continuous spectrum in spectral gaps of a fixed underlying symmetric operator was also discussed in [6, 7, 154, 156, 158, 159, 160]. For our purposes Theorem 8.3.9 and Theorem 8.3.10 play an important role; for the case of C<sup>∞</sup>-smooth domains such results can be found in [544], see also [352, 354]. The present versions of the extension theorems are inspired by slightly different considerations in [115]; the variant for Lipschitz domains in Theorem 8.7.5 can be proved by means of a similar technique (see also [92] for a comprehensive discussion). We remark that the definition of and the topologies on the spaces G<sub>0</sub> and G<sub>1</sub> in (8.7.4)–(8.7.5) from [92, 115] are partly inspired by abstract considerations in [246] for generalized boundary triplets. Concerning Section 8.3, as a final comment we mention that for more general second-order elliptic operators in bounded and unbounded domains Proposition 8.3.13 can be found in [118, 119, 120].

The boundary triplet in Theorem 8.4.1 can also be found in, e.g., [105, 169, 173, 359, 557, 643] and extends with some simple modifications to second-order and 2m-th order elliptic operators with variable coefficients. In a different form this boundary triplet is already essentially contained in the well-known work of Grubb [352], where all closed extensions of a minimal elliptic partial differential operator were characterised by nonlocal boundary conditions; see also the early contribution [747] by Vishik and the fundamental paper [140] by Birman. In fact, it seems that the more recent paper [27] by Amrein and Pearson on a generalisation of Weyl–Titchmarsh theory for Schrödinger operators inspired many operator theorists to investigate partial differential operators from an extension theory point of view. Various papers based on boundary triplet techniques and related methods were published in the last decade; besides those papers listed before we mention as a selection here [18, 90, 91, 103, 107, 108, 251, 323, 324, 360, 429, 566, 568, 635, 647, 678], in which, e.g., Dirichlet-to-Neumann maps and Kreĭn type resolvent formulas are treated. For self-adjoint realizations of the Laplacian, Schrödinger operators, and more general second-order elliptic differential operators in nonsmooth domains we refer to, e.g., [1, 92, 115, 326, 358]. Particular attention has been paid to the spectral properties of realizations with local and nonlocal Robin boundary conditions in [41, 106, 179, 180, 226, 325, 361, 413, 543, 628, 629, 665]. Furthermore, the recent contributions [35, 36, 37, 38, 39, 40] by Arendt, ter Elst, and coauthors form an interesting series of papers on elliptic differential operators and Dirichlet-to-Neumann operators based mainly on form methods.

The present treatment of semibounded Schrödinger operators in Section 8.5 with the help of boundary pairs and boundary triplets seems to be new. However, the problem of lower boundedness was also discussed with different methods in [349, 355, 362]. The Kreĭn–von Neumann extension, which is of special interest in this context, was investigated in, e.g., [14, 68, 69, 70, 71, 93, 181, 356, 362, 578, 579]. For coupling methods for elliptic differential operators based on boundary triplet techniques in the spirit of Section 8.6 we refer to the recent paper [88], where also an abstract version of the third Green identity was proved. Finally, we mention that various classes of λ-dependent boundary value problems for elliptic operators can be treated in such a context; see, e.g., [87, 103, 137, 285, 286].

#### **Notes on Appendices A–D**

The appendices were included for the convenience of the reader. Here we collect some notes for each of the appendices A–D.

In Appendix A we present the basic properties of Nevanlinna functions that are needed in the text. The Stieltjes inversion formula for the Borel transform is folklore. The integral representation of scalar Nevanlinna functions is derived in a classical way involving the Helly and Helly–Bray theorems; see [72, 763]. The inversion formula in Lemma A.2.7 is standard; see [272, 446]. For the present purposes it suffices to approach the integration with respect to operator-valued measures in terms of improper Riemann integrals. For the operator measure version see for instance [691]. For Kac functions, Stieltjes functions, and inverse Stieltjes functions see [49, 52, 329, 407, 446, 447, 505]. As far as we know, the proof of Proposition A.5.4 is new; cf. [572]. For related work see [318, 319, 329, 602]. The case of generalized Nevanlinna functions was initiated in the works of Kreĭn and Langer [499, 500]; see [145, 225, 232, 239, 242, 255, 380, 411, 434, 435, 437, 546, 547, 548] for further developments.
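To fix ideas, the two formulas alluded to above can be stated in their standard scalar form (this is the classical statement, recorded here for orientation):

```latex
% Integral representation of a scalar Nevanlinna function F,
% holomorphic on C \setminus R with Im F(lambda) >= 0 in the upper half-plane:
F(\lambda)=\alpha+\beta\lambda
  +\int_{\mathbb{R}}\Bigl(\frac{1}{t-\lambda}-\frac{t}{1+t^{2}}\Bigr)\,d\sigma(t),
\qquad \alpha\in\mathbb{R},\quad \beta\geq 0,\quad
\int_{\mathbb{R}}\frac{d\sigma(t)}{1+t^{2}}<\infty.
% The Stieltjes inversion formula recovers the measure sigma from F:
\sigma\bigl((a,b)\bigr)+\tfrac{1}{2}\sigma(\{a\})+\tfrac{1}{2}\sigma(\{b\})
  =\lim_{\varepsilon\downarrow 0}\frac{1}{\pi}
   \int_{a}^{b}\operatorname{Im}F(x+i\varepsilon)\,dx.
```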

For the general notion of Fourier transforms in Appendix B associated with Sturm–Liouville equations, see [539, 540, 542, 700, 701, 740, 780, 781]. Our basic idea here is inspired by the treatment in [208]. The arguments establishing the surjectivity of the Fourier transform are also inspired by [281, 715, 716]. Fourier transforms that are partially isometric go back at least to [204] in a treatment connected with multivalued operators. The discussion in this section has connections with the treatment of Kreĭn's directing functionals in [495, 518, 526, 527].

Necessary and sufficient conditions (as in Appendix C) for the sum of two closed subspaces to be closed have a long history. The concept of the opening of a pair of subspaces goes back to Friedrichs and Dixmier; see for instance [214, 249, 250]. Lemma C.4 goes back to [246].
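In finite dimensions the opening (gap) between two subspaces can be computed as the operator norm of the difference of the corresponding orthogonal projections. The following sketch illustrates this (the helper names `projection` and `gap` are ours, not notation from the text):

```python
import numpy as np

def projection(basis):
    """Orthogonal projection onto the span of the columns of `basis`."""
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.T

def gap(M, N):
    """Gap (opening) between the subspaces spanned by the columns of
    M and N: the spectral norm of the difference of the projections."""
    return np.linalg.norm(projection(M) - projection(N), ord=2)

# Two lines in R^2 enclosing an angle of 45 degrees: the gap is sin(pi/4).
M = np.array([[1.0], [0.0]])
N = np.array([[1.0], [1.0]]) / np.sqrt(2.0)
print(round(gap(M, N), 4))  # 0.7071
```

For one-dimensional subspaces the gap is exactly the sine of the angle between the lines, which the example confirms numerically.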

The main reference for Appendix D is the paper by Douglas [273]; see also [30, 299]. There have been many generalizations of Douglas' result; we mention only a particular direction of extension, namely [639, 684]; see also [402]. The application in Proposition 1.5.11 in Chapter 1 has the same flavor, but is obtained by reduction to the case treated in this appendix; see [390].
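A finite-dimensional sketch of the factorization in the Douglas lemma (an illustration of the range-inclusion criterion, not the operator-theoretic statement itself): if ran A ⊆ ran B, then A = BC, and C = B⁺A with the Moore–Penrose inverse B⁺ is a solution (the particular matrices below are our own example):

```python
import numpy as np

B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])   # ran(B) = span{e1, e2} in R^3
A = np.array([[2.0, 1.0],
              [0.0, 3.0],
              [0.0, 0.0]])   # columns of A lie in ran(B)

# Douglas-type factorization: C = B^+ A solves B C = A
# whenever the range inclusion ran(A) ⊆ ran(B) holds.
C = np.linalg.pinv(B) @ A
print(np.allclose(B @ C, A))  # True
```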

## **Bibliography**


## **List of Symbols**


For a linear relation H the following notation is used:


© The Editor(s) (if applicable) and The Author(s) 2020 J. Behrndt et al., *Boundary Value Problems, Weyl Functions, and Differential Operators*, Monographs in Mathematics 108, https://doi.org/10.1007/978-3-030-36714-5



## **Index**

- absolutely continuous, 366, 503
- absolutely continuous closure, 180
- antitonicity, 303
- Borel transform, 172, 631
- boundary
  - pair, 344
  - space, 115
  - triplet, 107
- boundary conditions
  - Dirichlet, 390, 593
  - essential, 360
  - λ-dependent, 425
  - natural, 360
  - Neumann, 390, 593
  - periodic, 394
  - Robin, 609
  - separated, 394
- bounded operator, 15
- C<sup>2</sup>-domain, 583
- C<sup>2</sup>-hypograph, 583
- canonical system
  - definite, 522
  - homogeneous, 504
  - inhomogeneous, 504
  - real, 509
  - regular, 510
  - singular, 510
  - solution, 504
  - trace-normed, 573
- Cayley transform, 22
- compatible, 345
- completely non-self-adjoint, 190
- completion problem, 130
- compressed resolvent, 156
- continuous closure, 180
- contractive, 16

- core, 55, 290, 298
- coupling (orthogonal), 274, 414, 616
- defect, 25
- defect number, 44
- dilation theorem, 269
- Dirichlet
  - -to-Neumann map, 594, 599
  - operator, 591
  - trace operator, 586, 595
- disjoint, 68
- distribution (regular), 587
- Douglas lemma, 701
- dual pairing, 578
- elliptic regularity, 593
- essential closure, 180
- exit space, 158
- extension, 13, 345
- finite-rank perturbation, 163
- flip-flop operator, 30
- form
  - bounded from below, 282
  - closable, 286
  - closed, 285
  - closure, 287
  - domain, 282
  - extension, 282
  - inclusion, 282
  - lower bound, 282
  - nonnegative, 282
  - quadratic, 282
  - restriction, 282
  - semibounded, 283
  - sesquilinear, 282
  - sum, 282
  - symmetric, 282
  - t-convergence, 284
- Fourier transform, 406, 419, 552, 581, 678, 685
- Friedrichs extension, 312
- functional calculus, 49
- fundamental matrix, 507
- fundamental system, 370

- γ-field, 118
- gap, 697
- Gelfand triple, 578
- generalized resolvent, 266
- graph, 12
- Green identity, 369
  - abstract, 108
  - abstract first, 341
  - first, 444, 586
  - second, 587, 598
- Gronwall's lemma, 487
- Hilbert–Schmidt operator, 537
- indefinite inner product, 75
- integrable potential, 483
- interface condition, 412
- intermediate extension, 68
- inverse Cayley transform, 22
- inverse Stieltjes function, 669
- isometric, 16, 76
- Kac function, 663
- kernel
  - holomorphic, 224
  - nonnegative, 224
  - symmetric, 224
  - uniformly bounded, 224
- KLMN theorem, 290
- Kreĭn
  - -Naĭmark formula, 159
  - -von Neumann extension, 321
  - formula, 148
  - space, 75
  - type extension, 321
- Lagrange identity, 108, 369, 505
- Lebesgue decomposition, 39, 170
- limit-circle, 374, 518
- limit-point, 374, 518
- Lipschitz domain, 625
- Lipschitz hypograph, 625
- Möbius transform, 20, 78
- maximal operator, 380, 589
- maximal relation, 525
- measure
  - absolutely continuous, 170
  - Borel, 169
  - derivative, 170
  - finite, 170
  - growth point, 178
  - minimal support, 171
  - operator-valued, 652
  - pure point, 170
  - regular, 170
  - singular, 170
  - singular continuous, 170
  - support, 171
  - symmetric derivative, 173
- minimal operator, 381, 589
- minimal relation, 526
- model (minimal), 237
- monotonicity principle, 307, 310
- Moore–Penrose inverse, 41, 699
- Neumann operator, 591
- Neumann trace operator, 586, 598
- Nevanlinna
  - family, 100
  - function, 638, 655
  - kernel, 235, 261
  - pair, 102
- nonoscillatory, 442
- nonprincipal solution, 459
- opening, 695
- operator, 12
  - -valued integral, 645
  - part, 40
  - range, 88
- orthogonal companion, 75
- oscillatory, 442
- parameter space, 115
- parametric representation, 87
- Plücker identity, 368


- Poincaré inequality, 585
- point of regular type, 23
- preminimal operator, 381, 589
- preminimal relation, 526
- principal solution, 459
- Q-function, 139
- quasi-derivative, 385
- quasiregular endpoint, 510
- regular endpoint, 367, 510
- regular part, 39
- relation
  - accumulative (maximal), 58
  - adjoint relation, 30, 74
  - bounded from below, 44
  - closable operator, 16
  - closed, 16
  - closure, 16
  - componentwise sum, 12
  - dissipative (maximal), 58
  - domain, 12
  - inverse, 12
  - kernel, 12
  - lower bound, 44
  - multivalued part, 12
  - nonnegative, 44
  - ordering, 301
  - orthogonal sum, 15
  - product, 14
  - range, 12
  - self-adjoint, 42
  - semibounded, 44
  - square root, 54
  - sum, 14
  - symmetric (maximal), 42
- representation theorem, 293, 299
- reproducing kernel Hilbert space, 224
- reproducing kernel property, 225
- resolvent
  - identity, 17, 96
  - operator, 17, 96
  - relation, 17
  - set, 23
- restricted boundary triplet, 142
- restriction, 13
- rigged Hilbert space, 578

- Schatten–von Neumann ideal, 165
- Schrödinger operator, 588, 616
- self-adjoint extension, 43
- simple, 190
- simple with respect to Δ, 193
- singular
  - continuous subspace, 185
  - endpoint, 368, 510
  - part, 39
  - value, 165
- Sobolev space, 581
- solution matrix, 506
- spectral
  - function, 49
  - measure, 49
  - projection, 49
- spectrum
  - absolutely continuous, 186
  - continuous, 23
  - discrete, 48
  - essential, 48
  - point, 23
  - residual, 23
  - singular, 186
  - singular continuous, 186
- square-integrable with respect to Δ, 501
- Stieltjes inversion formula, 631, 641
- Stone's formula, 50, 634
- Štraus family, 156
- strong
  - graph convergence, 80
  - graph limit, 80
  - resolvent convergence, 81
  - resolvent limit, 81
- Sturm–Liouville operator, 366
- subordinate solution, 425, 560
- subspace
  - absolutely continuous, 185
  - continuous, 186
  - defect, 46
  - hypermaximal neutral, 75
  - invariant, 188
  - neutral, 75
  - nonnegative, 75
  - nonpositive, 75
  - pure point, 185
  - singular, 186

- tight, 88
- trace map, 586
- trace operator, 586
- transformation, 134
- transversal, 68

- uniformly strict, 663
- unitarily equivalent, 33, 145
- unitary, 33, 76

- variation of constants, 370, 507
- von Neumann's first formula, 71
- von Neumann's second formula, 71, 73

- Weyl function, 121
- Weyl's alternative, 378, 518
- Wronskian determinant, 368